Search Results for author: Thuan Nguyen

Found 10 papers, 6 papers with code

On neural and dimensional collapse in supervised and unsupervised contrastive learning with hard negative sampling

no code implementations • 9 Nov 2023 Ruijie Jiang, Thuan Nguyen, Shuchin Aeron, Prakash Ishwar

For a widely-studied data model and general loss and sample-hardening functions, we prove that the Supervised Contrastive Learning (SCL), Hard-SCL (HSCL), and Unsupervised Contrastive Learning (UCL) risks are minimized by representations that exhibit Neural Collapse (NC), i.e., the class means form an Equiangular Tight Frame (ETF) and data from the same class are mapped to the same representation.

Contrastive Learning
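As a quick illustration of the ETF structure this abstract refers to, the sketch below constructs a simplex ETF for K = 4 class-mean directions and checks the defining property that every pair of distinct class means has cosine similarity -1/(K-1). This is illustrative numpy code under my own construction, not code from the paper:

```python
import numpy as np

# A simplex Equiangular Tight Frame (ETF): columns of M are K unit-norm
# class-mean directions (illustrative construction, not the paper's code).
K = 4
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

# Pairwise cosine similarities between distinct class means.
norms = np.linalg.norm(M, axis=0)
G = (M / norms).T @ (M / norms)
off_diag = G[~np.eye(K, dtype=bool)]

# Neural Collapse predicts all off-diagonal cosines equal -1/(K-1).
print(np.allclose(off_diag, -1.0 / (K - 1)))
```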

A principled approach to model validation in domain generalization

1 code implementation • 2 Apr 2023 Boyang Lyu, Thuan Nguyen, Matthias Scheutz, Prakash Ishwar, Shuchin Aeron

Domain generalization aims to learn a model with good generalization ability, that is, the learned model should not only perform well on several seen domains but also on unseen domains with different data distributions.

Classification Domain Generalization +1

Trade-off between reconstruction loss and feature alignment for domain generalization

1 code implementation • 26 Oct 2022 Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron

To deal with challenging settings in DG where neither the data nor the labels of the unseen domain are available at training time, the most common approach is to design classifiers based on domain-invariant representation features, i.e., latent representations that are unchanged and transferable between domains.

Domain Generalization Transfer Learning

Supervised Contrastive Learning with Hard Negative Samples

1 code implementation • 31 Aug 2022 Ruijie Jiang, Thuan Nguyen, Prakash Ishwar, Shuchin Aeron

In this paper, motivated by the effectiveness of hard-negative sampling strategies in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL).

Contrastive Learning Self-Supervised Learning
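The combination described here, label-aware positives plus hardness-weighted negatives, can be sketched as follows. The loss below is my own illustrative variant (negatives up-weighted by `exp(beta * similarity)`, with `tau` and `beta` as assumed temperature and hardness parameters), not the paper's exact H-SCL formulation:

```python
import numpy as np

# Hedged sketch of a hard-negative supervised contrastive loss: positives
# share the anchor's label (SCL idea); negatives are re-weighted toward
# harder, i.e., more similar, samples (hard-negative sampling idea).
def hscl_loss(z, y, tau=0.5, beta=1.0):
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / tau
    n = len(y)
    loss = 0.0
    for i in range(n):
        pos = (y == y[i]) & (np.arange(n) != i)
        neg = y != y[i]
        if not pos.any() or not neg.any():
            continue
        # Hardness weights concentrate mass on similar (hard) negatives.
        w = np.exp(beta * sim[i, neg])
        w = w / w.sum() * neg.sum()
        denom = np.exp(sim[i, pos]).sum() + (w * np.exp(sim[i, neg])).sum()
        loss += -np.log(np.exp(sim[i, pos]) / denom).mean()
    return loss / n

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(hscl_loss(z, y) > 0)  # the loss is strictly positive
```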

Joint covariate-alignment and concept-alignment: a framework for domain generalization

1 code implementation • 1 Aug 2022 Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron

Particularly, our framework proposes to jointly minimize both the covariate-shift as well as the concept-shift between the seen domains for a better performance on the unseen domain.

Concept Alignment Domain Generalization
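The two shifts the abstract jointly minimizes can be made concrete with a toy penalty: a covariate-alignment term matching marginal feature statistics across seen domains, and a concept-alignment term matching class-conditional feature means. The squared mean distances below are simple stand-ins I chose for illustration, not the paper's actual alignment metrics:

```python
import numpy as np

# Illustrative covariate- and concept-shift penalties between two seen
# domains (a sketch; the paper's framework uses its own distance measures).
def alignment_penalty(feats_a, feats_b, y_a, y_b):
    # Covariate shift: distance between marginal feature means.
    cov = np.sum((feats_a.mean(axis=0) - feats_b.mean(axis=0)) ** 2)
    # Concept shift: distance between class-conditional feature means,
    # summed over classes present in both domains.
    con = sum(
        np.sum((feats_a[y_a == c].mean(axis=0)
                - feats_b[y_b == c].mean(axis=0)) ** 2)
        for c in np.intersect1d(y_a, y_b))
    return cov, con

rng = np.random.default_rng(0)
fa, fb = rng.normal(size=(16, 4)), rng.normal(size=(16, 4))
ya, yb = rng.integers(0, 3, 16), rng.integers(0, 3, 16)
cov, con = alignment_penalty(fa, fb, ya, yb)
print(cov >= 0 and con >= 0)  # both penalties are nonnegative
```

In a training loop, both terms would be added to the classification loss with trade-off weights, so that minimizing them pushes the feature extractor toward domain-invariant representations.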

Conditional entropy minimization principle for learning domain invariant representation features

2 code implementations • 25 Jan 2022 Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron

Invariance-principle-based methods such as Invariant Risk Minimization (IRM), have recently emerged as promising approaches for Domain Generalization (DG).

Domain Generalization

Barycentric-alignment and reconstruction loss minimization for domain generalization

1 code implementation • 4 Sep 2021 Boyang Lyu, Thuan Nguyen, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron

To bridge this gap between theory and practice, we introduce a new upper bound that is free of terms having such dual dependence, resulting in a fully optimizable risk upper bound for the unseen domain.

Domain Generalization Representation Learning

On the Uniqueness of Binary Quantizers for Maximizing Mutual Information

no code implementations • 7 Jan 2020 Thuan Nguyen, Thinh Nguyen

Furthermore, we show that an optimal quantizer (possibly with multiple thresholds) is the one with the thresholding vector whose elements are all the solutions of r(y) = r* for some constant r* > 0.
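The design objective behind this result, choosing quantizer thresholds on Y to maximize the mutual information I(X; Z), can be illustrated on a toy discrete channel. The brute-force threshold search below is my own sketch of the objective, not the paper's characterization via r(y) = r*:

```python
import numpy as np

# Toy joint distribution p(X, Y): binary source X, 8 noisy output levels Y.
rng = np.random.default_rng(0)
p_xy = rng.random((2, 8))
p_xy /= p_xy.sum()

def mutual_info(p):
    # I(X; Z) in bits for a joint distribution p over (X, Z).
    px = p.sum(axis=1, keepdims=True)
    pz = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (px @ pz)[mask])).sum())

def quantize(p_xy, t):
    # Binary quantizer Z = 1{Y >= t}: merge output columns into two bins.
    return np.stack([p_xy[:, :t].sum(axis=1), p_xy[:, t:].sum(axis=1)], axis=1)

# Exhaustive search over single-threshold binary quantizers.
best = max(range(1, 8), key=lambda t: mutual_info(quantize(p_xy, t)))
print("best threshold:", best)
```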

Communication-Channel Optimized Partition

no code implementations • 6 Jan 2020 Thuan Nguyen, Thinh Nguyen

An original discrete source X with distribution p_X is corrupted by noise, producing the noisy data Y with a given joint distribution p(X, Y).

Minimizing Impurity Partition Under Constraints

no code implementations • 31 Dec 2019 Thuan Nguyen, Thinh Nguyen

In general, the problem of finding a partition that minimizes a given impurity (loss function) is NP-hard.
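The NP-hardness of minimum-impurity partitioning is easy to appreciate from the brute-force baseline: the assignment space grows exponentially in the number of elements. The toy sketch below exhaustively searches binary partitions of 6 channel outputs minimizing a weighted Gini impurity of the induced posteriors; Gini is my illustrative choice of impurity, not necessarily the paper's:

```python
import numpy as np
from itertools import product

# Toy joint distribution p(X, Y): binary X, 6 output symbols Y to partition.
rng = np.random.default_rng(1)
p_xy = rng.random((2, 6))
p_xy /= p_xy.sum()

def weighted_gini(p_xy, labels, k=2):
    # Sum over groups of (group mass) * Gini impurity of p(X | group).
    total = 0.0
    for g in range(k):
        cols = p_xy[:, labels == g]
        w = cols.sum()
        if w > 0:
            post = cols.sum(axis=1) / w
            total += w * (1.0 - (post ** 2).sum())
    return total

# Exhaustive search: 2^6 assignments -- exponential in general, hence
# only feasible for tiny instances, consistent with NP-hardness.
best_labels, best_cost = None, np.inf
for assign in product(range(2), repeat=6):
    labels = np.array(assign)
    cost = weighted_gini(p_xy, labels)
    if cost < best_cost:
        best_labels, best_cost = labels, cost
print(best_labels, round(best_cost, 4))
```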
