Search Results for author: Ruijie Jiang

Found 7 papers, 5 papers with code

On neural and dimensional collapse in supervised and unsupervised contrastive learning with hard negative sampling

no code implementations • 9 Nov 2023 • Ruijie Jiang, Thuan Nguyen, Shuchin Aeron, Prakash Ishwar

For a widely studied data model and general loss and sample-hardening functions, we prove that the Supervised Contrastive Learning (SCL), Hard-SCL (HSCL), and Unsupervised Contrastive Learning (UCL) risks are minimized by representations that exhibit Neural Collapse (NC), i.e., the class means form an Equiangular Tight Frame (ETF) and data from the same class are mapped to the same representation.
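As a brief aside (this definition is not spelled out in the abstract above): in a K-class simplex Equiangular Tight Frame, the class-mean directions have equal norms and pairwise cosine similarity exactly -1/(K-1), i.e., they are maximally separated. A minimal NumPy sketch of the standard construction, checking that property:

```python
import numpy as np

def simplex_etf(K):
    # Standard construction: center the K one-hot vectors at their
    # common mean, then normalize each row. The resulting K unit
    # vectors have pairwise cosine similarity exactly -1/(K-1).
    M = np.eye(K) - np.ones((K, K)) / K
    M /= np.linalg.norm(M, axis=1, keepdims=True)
    return M

K = 4
M = simplex_etf(K)
G = M @ M.T                              # Gram matrix of the unit vectors
off_diag = G[~np.eye(K, dtype=bool)]
print(np.allclose(off_diag, -1.0 / (K - 1)))  # True
```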

Contrastive Learning

Supervised Contrastive Learning with Hard Negative Samples

1 code implementation • 31 Aug 2022 • Ruijie Jiang, Thuan Nguyen, Prakash Ishwar, Shuchin Aeron

In this paper, motivated by the effectiveness of hard-negative sampling strategies in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL).

Contrastive Learning • Self-Supervised Learning

Measure Estimation in the Barycentric Coding Model

1 code implementation • 28 Jan 2022 • Matthew Werenski, Ruijie Jiang, Abiy Tasissa, Shuchin Aeron, James M. Murphy

Our first main result leverages the Riemannian geometry of Wasserstein-2 space to provide a procedure for recovering the barycentric coordinates as the solution to a quadratic optimization problem assuming access to the true reference measures.
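The paper's quadratic form is derived from the Wasserstein-2 geometry of the reference measures and is not reproduced in this listing; purely as an illustrative sketch, recovering coordinates by minimizing a quadratic over the probability simplex can be done with SciPy (the PSD matrix `A` below is a hypothetical stand-in, not the paper's Wasserstein-derived matrix):

```python
import numpy as np
from scipy.optimize import minimize

def simplex_qp(A):
    """Minimize x^T A x over the probability simplex.
    A is a placeholder for the paper's Wasserstein-derived matrix."""
    k = A.shape[0]
    x0 = np.full(k, 1.0 / k)                 # start at the simplex center
    res = minimize(
        lambda x: x @ A @ x,
        x0,
        jac=lambda x: 2.0 * A @ x,           # gradient for symmetric A
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
A = B @ B.T + 1e-6 * np.eye(5)               # random PSD stand-in matrix
lam = simplex_qp(A)
print(lam.round(3), lam.sum())
```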

Hard Negative Sampling via Regularized Optimal Transport for Contrastive Representation Learning

2 code implementations • 4 Nov 2021 • Ruijie Jiang, Prakash Ishwar, Shuchin Aeron

We study the problem of designing hard negative sampling distributions for unsupervised contrastive representation learning.
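The paper's own scheme uses regularized optimal transport, which is not reproduced in this listing; a common hardness-weighting baseline in this line of work tilts the negative distribution exponentially toward anchor similarity, sketched below (a generic illustration, not the paper's method):

```python
import numpy as np

def hard_negative_weights(anchor, negatives, beta=1.0):
    """Exponentially tilt negatives toward high similarity with the
    anchor, so "harder" negatives get more sampling mass. Generic
    hardness-weighting baseline, not the paper's OT-based scheme."""
    sims = negatives @ anchor          # cosine similarities for unit-norm inputs
    w = np.exp(beta * sims)            # larger beta -> harder concentration
    return w / w.sum()                 # normalize to a sampling distribution

rng = np.random.default_rng(1)
a = rng.standard_normal(8); a /= np.linalg.norm(a)
N = rng.standard_normal((16, 8))
N /= np.linalg.norm(N, axis=1, keepdims=True)
p = hard_negative_weights(a, N, beta=2.0)
print(p.sum())                         # sums to 1.0
```

Because the tilt is monotone in similarity, the most anchor-similar negative always receives the largest sampling probability.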

Contrastive Learning • Representation Learning

Interpretable contrastive word mover's embedding

1 code implementation • 1 Nov 2021 • Ruijie Jiang, Julia Gouvea, Eric Miller, David Hammer, Shuchin Aeron

This paper shows that a popular approach to the supervised embedding of documents for classification, namely, contrastive Word Mover's Embedding, can be significantly enhanced by adding interpretability.

Automatic coding of students' writing via Contrastive Representation Learning in the Wasserstein space

no code implementations • 26 Nov 2020 • Ruijie Jiang, Julia Gouvea, David Hammer, Eric Miller, Shuchin Aeron

This work is a step towards building a statistical machine learning (ML) method to provide automated support for qualitative analyses of students' writing, here specifically scoring laboratory reports in introductory biology for sophistication of argumentation and reasoning.

BIG-bench Machine Learning • Contrastive Learning • +4
