no code implementations • 19 Mar 2024 • Anh Bui, Vy Vo, Tung Pham, Dinh Phung, Trung Le
There has long been plenty of theoretical and empirical evidence supporting the success of ensemble learning.
1 code implementation • 28 Nov 2023 • Quan Dao, Binh Ta, Tung Pham, Anh Tran
We propose replacing the GAN in DDGAN with a UOT-based generative model to learn the backward diffusion process.
Ranked #3 on Image Generation on STL-10
no code implementations • 16 Nov 2023 • Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
1 code implementation • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
Recent works have shown that deep neural networks are vulnerable to adversarial examples: samples close to the original image that nonetheless cause the model to misclassify.
Ranked #1 on Image Classification
no code implementations • 29 Sep 2023 • Tuan Truong, Hoang-Phi Nguyen, Tung Pham, Minh-Tuan Tran, Mehrtash Harandi, Dinh Phung, Trung Le
Motivated by this analysis, we introduce our algorithm, Riemannian Sharpness-Aware Minimization (RSAM).
no code implementations • 17 May 2023 • Ngoc N. Tran, Son Duong, Hoang Phan, Tung Pham, Dinh Phung, Trung Le
Self-supervised learning aims to extract meaningful features from unlabeled data for further downstream tasks.
no code implementations • 24 Aug 2021 • Khang Le, Dung Le, Huy Nguyen, Dat Do, Tung Pham, Nhat Ho
When the metric is the inner product, which we refer to as inner product Gromov-Wasserstein (IGW), we demonstrate that the optimal transportation plans of entropic IGW and its unbalanced variant are (unbalanced) Gaussian distributions.
2 code implementations • 22 Aug 2021 • Khai Nguyen, Dang Nguyen, The-Anh Vu-Le, Tung Pham, Nhat Ho
Mini-batch optimal transport (m-OT) has been widely used recently to deal with the memory issue of OT in large-scale applications.
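As a concrete illustration of the m-OT scheme described above, here is a minimal sketch that averages exact OT costs over random mini-batch pairs. It assumes uniform weights and equal batch sizes, so each inner OT problem reduces to an assignment solvable with SciPy's Hungarian algorithm; the function names are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_ot_cost(X, Y):
    """Exact OT cost between two equal-size, uniformly weighted point sets.

    With uniform weights and equal sizes the optimal plan is a permutation,
    so the Hungarian algorithm solves the problem exactly.
    """
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

def minibatch_ot(X, Y, batch_size=32, n_batches=10, seed=0):
    """m-OT estimate: average exact OT cost over random mini-batch pairs."""
    rng = np.random.default_rng(seed)
    costs = []
    for _ in range(n_batches):
        xb = X[rng.choice(len(X), batch_size, replace=False)]
        yb = Y[rng.choice(len(Y), batch_size, replace=False)]
        costs.append(exact_ot_cost(xb, yb))
    return float(np.mean(costs))
```

Note that m-OT averages costs over independently drawn batch pairs; the memory saving comes from never forming the full n-by-n cost matrix.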
no code implementations • 18 Aug 2021 • Khang Le, Huy Nguyen, Tung Pham, Nhat Ho
We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of the multimarginal POT problem with a computational complexity upper bound of the order $\tilde{\mathcal{O}}(m^3(n+1)^{m}/ \varepsilon^2)$, where $\varepsilon > 0$ stands for the desired tolerance.
no code implementations • NeurIPS 2021 • Khang Le, Huy Nguyen, Quang Nguyen, Tung Pham, Hung Bui, Nhat Ho
We consider robust variants of the standard optimal transport, named robust optimal transport, where marginal constraints are relaxed via Kullback-Leibler divergence.
2 code implementations • 11 Feb 2021 • Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho
To address these problems, we propose a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), which finds the optimal coupling between mini-batches and can be seen as an approximation of a well-defined distance on the space of probability measures.
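The two-level idea of coupling mini-batches can be sketched as follows. This is an illustrative reading of the description above, not the paper's implementation: pairwise OT costs between mini-batches serve as the ground cost of an outer OT problem, which under uniform weights over equally many batches again reduces to an assignment. The helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def _ot_cost(X, Y):
    """Exact OT between equal-size, uniformly weighted point sets (Hungarian)."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

def bomb_ot(x_batches, y_batches):
    """Two-level mini-batch OT sketch.

    Treat each mini-batch as a single point, use pairwise OT costs as the
    ground cost, then solve an outer OT between the two collections of
    mini-batches.  With uniform weights and equally many batches on each
    side, the outer problem is again an assignment.
    """
    ground = np.array([[_ot_cost(xb, yb) for yb in y_batches]
                       for xb in x_batches])
    r, c = linear_sum_assignment(ground)  # outer coupling between batches
    return ground[r, c].mean()
```

Compared with plain m-OT, the outer coupling decides which mini-batches should be matched instead of pairing them at random.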
1 code implementation • ICCV 2021 • Trung Nguyen, Quang-Hieu Pham, Tam Le, Tung Pham, Nhat Ho, Binh-Son Hua
From this study, we propose to use sliced Wasserstein distance and its variants for learning representations of 3D point clouds.
2 code implementations • ICLR 2021 • Khai Nguyen, Son Nguyen, Nhat Ho, Tung Pham, Hung Bui
To improve the discrepancy and consequently the relational regularization, we propose a new relational discrepancy, named spherical sliced fused Gromov Wasserstein (SSFG), that can find an important area of projections characterized by a von Mises-Fisher distribution.
1 code implementation • ICLR 2021 • Khai Nguyen, Nhat Ho, Tung Pham, Hung Bui
Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been widely used in recent years due to their fast computation and scalability even when the probability measures lie in a very high-dimensional space.
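A minimal Monte Carlo sketch of SW, assuming equal-size samples and squared-Euclidean ground cost (the function name and defaults are illustrative): each random direction yields a 1-D OT problem that sorting solves in closed form, which is the source of the speed the snippet above mentions.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance.

    X, Y: (n, d) arrays of samples from two distributions (equal sizes
    here for simplicity).  Projects both point sets onto random unit
    directions and averages the 1-D W2^2 distances; for sorted
    equal-size samples the 1-D distance is a mean of squared gaps.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Directions drawn uniformly from the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    total = 0.0
    for t in theta:
        px, py = np.sort(X @ t), np.sort(Y @ t)
        total += np.mean((px - py) ** 2)  # 1-D W2^2 via sorting
    return np.sqrt(total / n_projections)
```

Max-SW replaces the average over random directions with a maximization over a single direction, typically found by gradient ascent.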
1 code implementation • ICML 2020 • Khiem Pham, Khang Le, Nhat Ho, Tung Pham, Hung Bui
We provide a computational complexity analysis for the Sinkhorn algorithm that solves the entropic regularized Unbalanced Optimal Transport (UOT) problem between two measures of possibly different masses with at most $n$ components.
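The Sinkhorn iterations for entropic UOT can be sketched as follows, using the standard scaling updates for unbalanced transport in which the KL relaxation of the marginals softens the usual Sinkhorn update by an exponent `tau / (tau + eps)`. Parameter names and defaults are illustrative, and this sketch is independent of the paper's complexity analysis.

```python
import numpy as np

def sinkhorn_uot(a, b, C, eps=0.05, tau=1.0, n_iters=500):
    """Sinkhorn-style scaling iterations for entropic unbalanced OT.

    Approximately solves
        min_P <C, P> - eps*H(P) + tau*KL(P@1 || a) + tau*KL(P.T@1 || b),
    where a and b may have different total masses; the KL terms relax
    the hard marginal constraints of balanced OT.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    power = tau / (tau + eps)     # softened Sinkhorn update exponent
    for _ in range(n_iters):
        u = (a / (K @ v)) ** power
        v = (b / (K.T @ u)) ** power
    return u[:, None] * K * v[None, :]  # transport plan
```

As `tau` grows, the exponent approaches 1 and the iteration recovers the balanced Sinkhorn algorithm.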
no code implementations • 19 Sep 2017 • Tung Pham, Trung Le, Hang Dang
In this paper, we propose applying the Stochastic Gradient Descent (SGD) framework to the first phase of support-based clustering, namely finding the domain of novelty, together with a new strategy for performing the clustering assignment.