Search Results for author: Christina Baek

Found 4 papers, 1 paper with code

Efficient Maximal Coding Rate Reduction by Variational Forms

no code implementations · 31 Mar 2022 · Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele

The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data, allowing for more robust training than standard approaches such as cross-entropy minimization.

Image Classification
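
For context, the MCR$^2$ objective maximizes the coding rate of the full feature set while minimizing the summed coding rates of the per-class subsets. Below is a minimal PyTorch sketch of the base objective (not this paper's variational reformulation), assuming row-major features `Z` of shape (n, d), integer class labels, and a quantization parameter `eps`:

```python
import torch

def coding_rate(Z, eps=0.5):
    # R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z) for row-major Z (n x d).
    n, d = Z.shape
    return 0.5 * torch.logdet(torch.eye(d) + (d / (n * eps ** 2)) * Z.T @ Z)

def mcr2_loss(Z, labels, eps=0.5):
    # Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): expand the rate of the
    # whole feature set while compressing each class's features.
    n = Z.shape[0]
    expansion = coding_rate(Z, eps)
    compression = sum(
        (Z[labels == c].shape[0] / n) * coding_rate(Z[labels == c], eps)
        for c in labels.unique()
    )
    return -(expansion - compression)  # negate so minimizing maximizes Delta R
```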

Computational Benefits of Intermediate Rewards for Goal-Reaching Policy Learning

1 code implementation · 8 Jul 2021 · Yuexiang Zhai, Christina Baek, Zhengyuan Zhou, Jiantao Jiao, Yi Ma

In both OWSP and OWMP settings, we demonstrate that adding intermediate rewards to subgoals is more computationally efficient than only rewarding the agent once it completes the goal of reaching a terminal state.

Hierarchical Reinforcement Learning · Q-Learning
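
As a rough illustration of the idea, here is a toy Python sketch contrasting a sparse terminal reward with one-time intermediate rewards at subgoals; the subgoal bonus of 0.1 is hypothetical, and the paper's formal OWSP/OWMP analysis is not reproduced:

```python
def reward(state, goal, subgoals, reached, use_intermediate=True):
    # Sparse setting: the agent is rewarded only at the terminal goal state.
    if state == goal:
        return 1.0
    # Intermediate setting: a one-time bonus the first time each subgoal is
    # reached, shortening the horizon over which credit must propagate.
    if use_intermediate and state in subgoals and state not in reached:
        reached.add(state)
        return 0.1  # hypothetical bonus value; not taken from the paper
    return 0.0
```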

Assessing Generalization of SGD via Disagreement

no code implementations · ICLR 2022 · Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, J. Zico Kolter

We empirically show that the test error of deep networks can be estimated by simply training the same architecture on the same training set but with a different run of Stochastic Gradient Descent (SGD), and measuring the disagreement rate between the two networks on unlabeled test data.
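
The estimator itself is simple to state. A minimal PyTorch sketch, assuming two independently trained classifiers (`model_a`, `model_b` are hypothetical names) and a loader over unlabeled inputs:

```python
import torch

@torch.no_grad()
def disagreement_rate(model_a, model_b, loader):
    # Fraction of inputs on which two SGD runs of the same architecture
    # predict different labels; the paper shows this tracks test error.
    model_a.eval(); model_b.eval()
    disagree, total = 0, 0
    for x in loader:  # unlabeled data: batches of inputs only
        pred_a = model_a(x).argmax(dim=1)
        pred_b = model_b(x).argmax(dim=1)
        disagree += (pred_a != pred_b).sum().item()
        total += x.size(0)
    return disagree / total
```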

Incremental Learning via Rate Reduction

no code implementations · CVPR 2021 · Ziyang Wu, Christina Baek, Chong You, Yi Ma

Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes.

Incremental Learning
