Search Results for author: Toan Tran

Found 18 papers, 8 papers with code

On Inference Stability for Diffusion Models

2 code implementations · 19 Dec 2023 · Viet Nguyen, Giang Vu, Tung Nguyen Thanh, Khoat Than, Toan Tran

To minimize that gap, we propose a novel sequence-aware loss that reduces the estimation gap and thereby enhances sampling quality.
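For context, the per-step objective such a sequence-aware loss would build on is the standard DDPM noise-prediction loss, which scores each diffusion step independently. A minimal NumPy sketch, with function name and tensor shapes purely illustrative and not taken from the paper:

```python
import numpy as np

def ddpm_denoising_loss(eps_pred, eps_true):
    """Standard per-step DDPM objective: mean squared error between the
    noise actually added at a diffusion step and the network's prediction
    of it. A sequence-aware loss, by contrast, accounts for how these
    per-step errors accumulate across the sampling trajectory."""
    return np.mean((eps_pred - eps_true) ** 2)

rng = np.random.default_rng(0)
eps = rng.standard_normal((8, 16))                # true noise for a batch
pred = eps + 0.1 * rng.standard_normal((8, 16))   # slightly off prediction
loss = ddpm_denoising_loss(pred, eps)
print(float(loss))
```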

Denoising

KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All

no code implementations · 26 Nov 2023 · Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le

Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.

Continual Learning · Meta-Learning

Robust Contrastive Learning With Theory Guarantee

no code implementations · 16 Nov 2023 · Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le

Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
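A minimal NumPy sketch of one standard contrastive objective, the InfoNCE loss, illustrates the label-free setup described here; this is a generic example, not the specific loss analyzed in the paper:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Minimal InfoNCE contrastive loss: row i of z1 should match row i
    of z2 (a positive pair, e.g. two augmentations of the same input);
    all other rows act as negatives. No labels are required."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                         # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 32))
aligned = info_nce(z, z + 0.01 * rng.standard_normal((8, 32)))
random_ = info_nce(z, rng.standard_normal((8, 32)))
assert aligned < random_   # matched views incur a lower loss
```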

Contrastive Learning

SigFormer: Signature Transformers for Deep Hedging

1 code implementation · 20 Oct 2023 · Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi

To mitigate such difficulties, we introduce SigFormer, a novel deep learning model that combines the power of path signatures and transformers to handle sequential data, particularly in cases with irregularities.

Conditional Support Alignment for Domain Adaptation with Label Shift

no code implementations · 29 May 2023 · Anh T Nguyen, Lam Tran, Anh Tong, Tuan-Duy H. Nguyen, Toan Tran

In this paper, we propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains, yielding a representation more useful for the classification task.

Unsupervised Domain Adaptation

Stochastic Multiple Target Sampling Gradient Descent

1 code implementation · 4 Jun 2022 · Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung

Furthermore, an analysis of its asymptotic properties shows that SVGD reduces exactly to a single-objective optimization problem, and can thus be viewed as a probabilistic version of that problem.
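For reference, a minimal NumPy sketch of vanilla SVGD, the single-distribution sampler the snippet analyzes, on a standard Gaussian target; the fixed bandwidth and step size are illustrative choices, and this is not the paper's multi-target variant:

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, lr=0.2):
    """One Stein variational gradient descent update on particles x.
    Each particle is pushed by the kernel-weighted score (attraction to
    high-density regions) plus a repulsive kernel-gradient term that
    keeps the particles spread out."""
    n = x.shape[0]
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    k = np.exp(-d2 / (2 * h))                             # RBF kernel matrix
    grad_k = (x[:, None, :] - x[None, :, :]) * k[:, :, None] / h
    phi = (k @ grad_logp(x) + grad_k.sum(axis=1)) / n
    return x + lr * phi

# Target: standard Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 2)) + 5.0                    # start far from the mode
for _ in range(300):
    x = svgd_step(x, lambda x: -x)
print(np.round(x.mean(axis=0), 1))                        # cloud centers near the origin
```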

Multi-Task Learning

Distributionally Robust Fair Principal Components via Geodesic Descents

no code implementations · ICLR 2022 · Hieu Vu, Toan Tran, Man-Chung Yue, Viet Anh Nguyen

Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines.
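A minimal sketch of the standard PCA baseline via SVD, i.e. the plain (non-robust, fairness-agnostic) method the paper builds on; the paper's geodesic-descent algorithm is not reproduced here:

```python
import numpy as np

def pca(X, k):
    """Standard PCA: project centered data onto the top-k right singular
    vectors (the principal components)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
# Data with most variance in two coordinates, embedded in 5 dimensions
X = rng.standard_normal((100, 5)) * np.array([5.0, 1.0, 0.1, 0.1, 0.1])
Z, components = pca(X, 2)
print(Z.shape)  # prints (100, 2)
# fraction of total variance captured by the top-2 projection
explained = Z.var(axis=0).sum() / (X - X.mean(0)).var(axis=0).sum()
```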

Dimensionality Reduction · Fairness

On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources

no code implementations · NeurIPS 2021 · Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh Phung

Domain adaptation (DA) benefits from rigorous theoretical work that studies its characteristics and various aspects, e.g., learning domain-invariant representations and the associated trade-offs.

Domain Generalization · Transfer Learning

LASSO: Latent Sub-spaces Orientation for Domain Generalization

no code implementations · 29 Sep 2021 · Long Tung Vuong, Trung Quoc Phung, Toan Tran, Anh Tuan Tran, Dinh Phung, Trung Le

To achieve a satisfactory generalization performance on prediction tasks in an unseen domain, existing domain generalization (DG) approaches often rely on the strict assumption of fixed domain-invariant features and common hypotheses learned from a set of training domains.

Domain Generalization

KL Guided Domain Adaptation

1 code implementation · ICLR 2022 · A. Tuan Nguyen, Toan Tran, Yarin Gal, Philip H. S. Torr, Atılım Güneş Baydin

A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain.
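One simple way to instantiate this idea is to fit a diagonal Gaussian to each domain's batch of features and penalize the KL divergence between the two fits. The sketch below is a generic moment-matching illustration of marginal alignment, not the exact estimator used in the paper:

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def marginal_alignment_penalty(feat_src, feat_tgt, eps=1e-6):
    """Penalize mismatch between the source and target feature marginals
    by fitting a diagonal Gaussian to each batch and taking the KL."""
    mu_s, var_s = feat_src.mean(0), feat_src.var(0) + eps
    mu_t, var_t = feat_tgt.mean(0), feat_tgt.var(0) + eps
    return gaussian_kl(mu_s, var_s, mu_t, var_t)

rng = np.random.default_rng(0)
src = rng.standard_normal((256, 16))
tgt_close = rng.standard_normal((256, 16))        # same feature distribution
tgt_far = rng.standard_normal((256, 16)) + 3.0    # shifted feature distribution
# aligned marginals incur a much smaller penalty
assert marginal_alignment_penalty(src, tgt_close) < marginal_alignment_penalty(src, tgt_far)
```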

Domain Adaptation

Domain Invariant Representation Learning with Domain Density Transformations

1 code implementation · NeurIPS 2021 · A. Tuan Nguyen, Toan Tran, Yarin Gal, Atılım Güneş Baydin

Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains.

Domain Generalization · Representation Learning

Bayesian Metric Learning for Robust Training of Deep Models under Noisy Labels

no code implementations · 1 Jan 2021 · Toan Tran, Hieu Vu, Gustavo Carneiro, Hung Bui

Label noise arises naturally during data collection and annotation, and has been shown to significantly harm deep learning models by reducing accuracy and increasing sample complexity.

General Classification · Metric Learning +1

Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior

no code implementations · 21 Dec 2020 · Anh Tong, Toan Tran, Hung Bui, Jaesik Choi

Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models since each kernel structure has different model complexity and data fitness.
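The search space in question is built from base kernels combined by sums and products, operations that preserve positive semi-definiteness. A generic NumPy sketch of such compositional kernels; the specific kernels and hyperparameters are illustrative, not structures learned by the paper's shrinkage prior:

```python
import numpy as np

def rbf(x1, x2, ls=1.0):
    """Squared-exponential kernel: smooth, slowly varying structure."""
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (2 * ls ** 2))

def periodic(x1, x2, period=1.0, ls=1.0):
    """Periodic kernel: repeating structure with the given period."""
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2 * np.sin(np.pi * d / period) ** 2 / ls ** 2)

x = np.linspace(0, 4, 50)
# Compositions build richer structure, e.g. a product of RBF and periodic
# gives a locally periodic kernel for quasi-repeating signals.
K_sum = rbf(x, x) + periodic(x, x)
K_prod = rbf(x, x, ls=2.0) * periodic(x, x)
# Sums and products of valid kernels remain valid (positive semi-definite).
for K in (K_sum, K_prod):
    eigvals = np.linalg.eigvalsh(K + 1e-8 * np.eye(len(x)))
    assert eigvals.min() > -1e-6
```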

Gaussian Processes · Time Series +1

Bayesian Generative Active Deep Learning

no code implementations · 26 Apr 2019 · Toan Tran, Thanh-Toan Do, Ian Reid, Gustavo Carneiro

Deep learning models have demonstrated outstanding performance on several problems, but they tend to require immense computational and human resources for training and labeling, constraining the types of problems that can be tackled.

Active Learning · Data Augmentation

A Theoretically Sound Upper Bound on the Triplet Loss for Improving the Efficiency of Deep Distance Metric Learning

no code implementations · CVPR 2019 · Thanh-Toan Do, Toan Tran, Ian Reid, Vijay Kumar, Tuan Hoang, Gustavo Carneiro

Another approach explored in the field relies on an ad-hoc linearization (in terms of N) of the triplet loss that introduces class centroids, which must be optimized using the whole training set for each mini-batch; a naive implementation of this approach therefore has run-time complexity O(N^2).
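For context, a minimal NumPy sketch of the standard triplet loss, whose naive mining over N training samples drives the complexity trade-offs discussed here; the margin value and toy embeddings are illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the anchor toward the positive and push
    it at least `margin` further from the negative. With N samples the
    number of possible triplets grows cubically, which motivates the
    cheaper formulations compared in the paper."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # anchor-negative distance
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # close to the anchor
n = np.array([[2.0, 0.0]])   # far from the anchor
print(triplet_loss(a, p, n))  # prints 0.0: the margin is already satisfied
```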

Metric Learning · Retrieval
