no code implementations • 9 Nov 2023 • Ruijie Jiang, Thuan Nguyen, Shuchin Aeron, Prakash Ishwar
For a widely-studied data model and general loss and sample-hardening functions, we prove that the Supervised Contrastive Learning (SCL), Hard-SCL (HSCL), and Unsupervised Contrastive Learning (UCL) risks are minimized by representations that exhibit Neural Collapse (NC), i.e., the class means form an Equiangular Tight Frame (ETF) and data from the same class are mapped to the same representation.
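The simplex ETF geometry behind Neural Collapse is easy to verify numerically. The sketch below (an illustration of the standard simplex-ETF construction, not code from the paper; the function name is my own) builds K unit-norm class means whose pairwise cosine similarities all equal -1/(K-1).

```python
import numpy as np

def simplex_etf(K: int) -> np.ndarray:
    """Return K unit vectors in R^K forming a simplex Equiangular Tight Frame.

    Columns are the class means; every distinct pair has cosine
    similarity -1/(K-1), the maximally separated equiangular arrangement.
    """
    # Standard construction: center the identity, then rescale to unit norm.
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

K = 4
M = simplex_etf(K)
G = M.T @ M  # Gram matrix: diagonal 1 (unit norm), off-diagonal -1/(K-1)
```

Any rotation of these columns is an equally valid ETF, which is why NC results are stated up to an orthogonal transformation.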
1 code implementation • 2 Apr 2023 • Boyang Lyu, Thuan Nguyen, Matthias Scheutz, Prakash Ishwar, Shuchin Aeron
Domain generalization aims to learn a model that generalizes well: the learned model should perform well not only on several seen domains but also on unseen domains with different data distributions.
1 code implementation • 26 Oct 2022 • Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron
To handle the challenging DG setting where neither the data nor the labels of the unseen domain are available at training time, the most common approach is to design classifiers based on domain-invariant representations, i.e., latent representations that are unchanged and transferable between domains.
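A minimal way to encourage domain-invariant representations is to penalize disagreement between per-domain feature statistics. The sketch below (a toy first-moment alignment penalty of my own naming, not the paper's method) is zero exactly when all domains share the same feature mean.

```python
import numpy as np

def mean_alignment_penalty(feats_by_domain):
    """Toy domain-invariance penalty: squared distance of each domain's
    feature mean from the overall mean of means. Zero iff all domain
    feature means coincide."""
    means = [f.mean(axis=0) for f in feats_by_domain]
    center = np.mean(means, axis=0)
    return float(sum(np.sum((m - center) ** 2) for m in means))

rng = np.random.default_rng(1)
base = rng.normal(size=(10, 3))
same = mean_alignment_penalty([base, base.copy()])     # identical domains
shifted = mean_alignment_penalty([base, base + 1.0])   # mean-shifted domain
```

Practical methods match richer statistics (covariances, kernel mean embeddings, adversarial discrepancies), but the training signal has this same shape: a discrepancy term added to the classification loss.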
1 code implementation • 31 Aug 2022 • Ruijie Jiang, Thuan Nguyen, Prakash Ishwar, Shuchin Aeron
In this paper, motivated by the effectiveness of hard-negative sampling strategies in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL).
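To make the combination concrete, here is a sketch of a hard-negative supervised contrastive loss in the spirit of H-SCL: positives are same-label samples as in SCL, and negatives are reweighted toward harder (more similar) examples as in hard-negative UCL. Function name, the exponential weighting, and the averaging form are my assumptions, not the authors' implementation.

```python
import numpy as np

def h_scl_loss(Z, y, tau=0.5, beta=1.0):
    """Sketch of a hard-negative supervised contrastive loss.

    Z: (n, d) L2-normalized embeddings; y: (n,) integer labels.
    Negatives (different label) are reweighted by exp(beta * similarity)
    so that harder negatives contribute more to the denominator.
    """
    n = Z.shape[0]
    S = Z @ Z.T / tau  # pairwise temperature-scaled similarities
    loss = 0.0
    for i in range(n):
        pos = (y == y[i]) & (np.arange(n) != i)
        neg = y != y[i]
        if not pos.any() or not neg.any():
            continue
        w = np.exp(beta * S[i, neg])          # hard-negative weights
        w = w / w.sum() * neg.sum()           # keep total mass = #negatives
        denom_neg = (w * np.exp(S[i, neg])).sum()
        # Average per-positive log-losses for anchor i (SupCon-style).
        loss += -np.mean(np.log(np.exp(S[i, pos]) /
                                (np.exp(S[i, pos]) + denom_neg)))
    return loss / n

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 4))
Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
loss = h_scl_loss(Z, y)
```

Setting beta = 0 recovers uniform negative weighting, i.e., a plain supervised contrastive loss, which makes the hardening term easy to ablate.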
1 code implementation • 1 Aug 2022 • Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron
In particular, our framework jointly minimizes both the covariate shift and the concept shift between the seen domains to improve performance on the unseen domain.
2 code implementations • 25 Jan 2022 • Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron
Invariance-principle-based methods, such as Invariant Risk Minimization (IRM), have recently emerged as promising approaches for Domain Generalization (DG).
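For context, the practical IRMv1 objective (Arjovsky et al.) adds, per environment, the squared gradient of that environment's risk with respect to a fixed scalar "dummy" classifier w = 1. For squared loss this gradient has a closed form, so the penalty can be sketched without autodiff; the function name below is hypothetical.

```python
import numpy as np

def irm_penalty_sq(preds, targets):
    """IRMv1-style penalty for squared loss with a scalar dummy classifier w.

    R_e(w) = mean((w * preds - targets)^2); the penalty is
    (dR_e/dw at w=1)^2 = (2 * mean((preds - targets) * preds))^2.
    """
    grad = 2.0 * np.mean((preds - targets) * preds)
    return grad ** 2

targets = np.array([1.0, -1.0, 2.0])
p_opt = irm_penalty_sq(targets, targets)       # residuals vanish: zero penalty
p_off = irm_penalty_sq(2 * targets, targets)   # miscalibrated: positive penalty
```

The penalty is zero when w = 1 is already optimal for that environment, so summing it across environments pushes the shared representation toward a classifier that is simultaneously optimal everywhere, which is the invariance principle in a nutshell.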
1 code implementation • 4 Sep 2021 • Boyang Lyu, Thuan Nguyen, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron
To bridge this gap between theory and practice, we introduce a new upper bound that is free of terms having such dual dependence, resulting in a fully optimizable risk upper bound for the unseen domain.
no code implementations • 7 Jan 2020 • Thuan Nguyen, Thinh Nguyen
Furthermore, we show that an optimal quantizer (possibly with multiple thresholds) is one whose thresholding vector's elements are precisely the solutions of r(y) = r* for some constant r* > 0.
no code implementations • 6 Jan 2020 • Thuan Nguyen, Thinh Nguyen
Consider an original discrete source X with distribution p_X that is corrupted by noise, producing noisy data Y with a given joint distribution p(X, Y).
no code implementations • 31 Dec 2019 • Thuan Nguyen, Thinh Nguyen
In general, the problem of finding a partition that minimizes a given impurity (loss function) is NP-hard.