Search Results for author: Christina Baek

Found 7 papers, 2 papers with code

Incremental Learning via Rate Reduction

no code implementations • CVPR 2021 • Ziyang Wu, Christina Baek, Chong You, Yi Ma

Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes.

Incremental Learning

Assessing Generalization of SGD via Disagreement

no code implementations • ICLR 2022 • Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, J. Zico Kolter

We empirically show that the test error of deep networks can be estimated by simply training the same architecture on the same training set but with a different run of Stochastic Gradient Descent (SGD), and measuring the disagreement rate between the two networks on unlabeled test data.
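
A minimal sketch of the procedure described above, with scikit-learn MLPs standing in for the deep networks studied in the paper: the same architecture is trained twice on the same data with two different SGD seeds, and the disagreement rate on unlabeled held-out data is compared against the actual test error. The dataset, architecture, and hyperparameters below are placeholder choices, not the paper's experimental setup.

```python
# Sketch: estimate test error via the disagreement rate of two independently
# trained copies of the same architecture (different SGD seeds).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

# Same architecture and training set, two different runs of SGD (different seeds).
net_a = MLPClassifier(hidden_layer_sizes=(64, 64), solver="sgd",
                      learning_rate_init=0.05, max_iter=300,
                      random_state=1).fit(X_train, y_train)
net_b = MLPClassifier(hidden_layer_sizes=(64, 64), solver="sgd",
                      learning_rate_init=0.05, max_iter=300,
                      random_state=2).fit(X_train, y_train)

pred_a, pred_b = net_a.predict(X_test), net_b.predict(X_test)

# Disagreement needs no labels; the actual test error does.
disagreement = np.mean(pred_a != pred_b)
test_error = np.mean(pred_a != y_test)
print(f"disagreement rate (unlabeled): {disagreement:.3f}")
print(f"actual test error of net A:    {test_error:.3f}")
```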

Computational Benefits of Intermediate Rewards for Goal-Reaching Policy Learning

1 code implementation • 8 Jul 2021 • Yuexiang Zhai, Christina Baek, Zhengyuan Zhou, Jiantao Jiao, Yi Ma

In both OWSP and OWMP settings, we demonstrate that adding intermediate rewards to subgoals is more computationally efficient than only rewarding the agent once it completes the goal of reaching a terminal state.

Hierarchical Reinforcement Learning • Q-Learning +1
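
A toy sketch of the intuition in the abstract above, not the paper's OWSP/OWMP constructions or its analysis: tabular Q-learning on a hypothetical chain MDP is run once with a terminal-only reward and once with small intermediate rewards at assumed subgoal states, counting how many training episodes actually reach the goal. The shaped variant picks up a learning signal as soon as it reaches the first subgoal, long before it ever stumbles onto the terminal state.

```python
# Toy illustration (not the paper's setting): tabular Q-learning on a chain MDP,
# with and without small intermediate rewards at hypothetical subgoals.
import numpy as np

rng = np.random.default_rng(0)
N = 20                      # states 0 .. N-1; state N-1 is the terminal goal
SUBGOALS = {5, 10, 15}      # assumed subgoal states for the shaped variant

def step(s, a, shaped):
    """Move left (a=0) or right (a=1). Reward 1 at the goal; optionally a small
    bonus when stepping rightward onto a subgoal (kept tiny so that circling a
    subgoal never beats reaching the goal)."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
    if s2 == N - 1:
        return s2, 1.0, True
    r = 0.01 if (shaped and s2 in SUBGOALS and s2 > s) else 0.0
    return s2, r, False

def train(shaped, episodes=300, eps=0.2, alpha=0.5, gamma=0.95, horizon=100):
    Q = np.zeros((N, 2))
    successes = 0
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < horizon:
            if rng.random() < eps or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(2))        # explore, and break Q-ties randomly
            else:
                a = int(Q[s].argmax())
            s2, r, done = step(s, a, shaped)
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s, t = s2, t + 1
        successes += int(done)
    return successes

print("episodes reaching the goal, terminal reward only:", train(shaped=False))
print("episodes reaching the goal, with subgoal rewards:", train(shaped=True))
```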

Efficient Maximal Coding Rate Reduction by Variational Forms

no code implementations • CVPR 2022 • Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele

The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data, allowing for more robust training than standard approaches such as cross-entropy minimization.

Image Classification
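
As background for the entry above, a sketch of the original MCR$^2$ objective, the gap between the coding rate of all features and the class-conditional coding rates, which this paper reformulates via variational forms; the variational reformulation itself is not reproduced here, and the feature matrix, labels, and precision eps below are placeholder choices.

```python
# Sketch of the original MCR^2 objective Delta R = R(Z) - R_c(Z | labels);
# the paper's variational forms (for efficient training) are not reproduced.
# Z holds d-dimensional features as columns; eps is the coding precision.
import numpy as np

def coding_rate(Z, eps=0.5):
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * logdet

def mcr2_objective(Z, labels, eps=0.5):
    n = Z.shape[1]
    expansion = coding_rate(Z, eps)            # R(Z): rate of all features together
    compression = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]                 # features belonging to one class
        compression += (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return expansion - compression             # MCR^2 maximizes this gap

# Toy check on random unit-norm features with three "classes".
rng = np.random.default_rng(0)
Z = rng.normal(size=(32, 300))
Z /= np.linalg.norm(Z, axis=0, keepdims=True)
labels = np.repeat(np.arange(3), 100)
print("Delta R on random features:", mcr2_objective(Z, labels))
```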

Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift

1 code implementation • 27 Jun 2022 • Christina Baek, Yiding Jiang, Aditi Raghunathan, Zico Kolter

In this paper, we show that a similar but surprising phenomenon also holds for the agreement between pairs of neural network classifiers: whenever accuracy-on-the-line holds, we observe that the OOD agreement between the predictions of any pair of neural networks (with potentially different architectures) also exhibits a strong linear correlation with their ID agreement.

Model Selection
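
A rough sketch of how the observation above can be turned into label-free OOD accuracy estimation, under the assumption that both accuracy-on-the-line and agreement-on-the-line hold: fit the line relating OOD to ID pairwise agreement (which needs no labels), then push each model's ID accuracy through the same line. The input format and the probit-scale linear fit are assumptions of this sketch; see the authors' released implementation for the actual procedure.

```python
from itertools import combinations
import numpy as np
from scipy.stats import norm

def predict_ood_accuracy(preds_id, preds_ood, labels_id):
    """preds_id / preds_ood: dicts {model_name: predicted-label array} on the
    ID and OOD splits; labels_id: ground-truth labels for the ID split only."""
    probit = norm.ppf
    # Pairwise agreement rates on ID and OOD data (no labels needed).
    agree_id, agree_ood = [], []
    for a, b in combinations(preds_id, 2):
        agree_id.append(np.mean(preds_id[a] == preds_id[b]))
        agree_ood.append(np.mean(preds_ood[a] == preds_ood[b]))
    # Slope and bias of the OOD-vs-ID agreement line (probit scale).
    slope, bias = np.polyfit(probit(np.array(agree_id)),
                             probit(np.array(agree_ood)), deg=1)
    # Apply the same line to each model's ID accuracy to estimate OOD accuracy.
    return {name: norm.cdf(slope * probit(np.mean(p == labels_id)) + bias)
            for name, p in preds_id.items()}

# Hypothetical usage with synthetic prediction arrays, just to exercise the
# function; real use would pass model predictions on actual ID/OOD test sets.
rng = np.random.default_rng(0)
labels_id = rng.integers(0, 10, size=1000)
preds_id = {f"model_{i}": np.where(rng.random(1000) < 0.7 + 0.05 * i,
                                   labels_id, rng.integers(0, 10, size=1000))
            for i in range(3)}
preds_ood = {f"model_{i}": rng.integers(0, 10, size=1000) for i in range(3)}
print(predict_ood_accuracy(preds_id, preds_ood, labels_id))
```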

On the Joint Interaction of Models, Data, and Features

no code implementations • 7 Jun 2023 • Yiding Jiang, Christina Baek, J. Zico Kolter

Thus, we believe this work provides valuable new insight into our understanding of feature learning.

Predicting the Performance of Foundation Models via Agreement-on-the-Line

no code implementations • 2 Apr 2024 • Aman Mehra, Rahul Saxena, Taeyoun Kim, Christina Baek, Zico Kolter, Aditi Raghunathan

Recently, it was shown that ensembles of neural networks exhibit the phenomenon "agreement-on-the-line", which can be leveraged to reliably predict OOD performance without labels.
