19 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in L2 Regularization
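For readers new to the topic, here is a minimal NumPy sketch of L2 regularization (ridge / weight decay): the penalty λ·‖w‖² is added to the loss, shifting the gradient by 2λw and shrinking weights toward zero. All names here are illustrative, not from any listed paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def fit_ridge(X, y, lam, lr=0.05, steps=2000):
    """Gradient descent on MSE + lam * ||w||^2 (hypothetical helper)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # L2 penalty contributes 2 * lam * w to the gradient.
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

w_unreg = fit_ridge(X, y, lam=0.0)
w_reg = fit_ridge(X, y, lam=1.0)
# Stronger regularization yields a smaller weight norm.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_unreg))
```

The same penalty appears in deep learning frameworks as the `weight_decay` option of optimizers.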
Continual learning has recently received a great deal of attention, with several approaches being proposed.
This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting.
In this paper, we focus on online representation learning in non-stationary environments, which may require continuous adaptation of the model architecture.
Collaboratively Weighting Deep and Classic Representation via L2 Regularization for Image Classification
We propose a deep collaborative weight-based classification (DeepCWC) method to resolve this problem, by providing a novel option to fully take advantage of deep features in classic machine learning.
Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning.
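As a hedged illustration of importance weighting under covariate shift (one of the settings named above): training losses are reweighted by w(x) = p_test(x) / p_train(x) so the empirical risk estimates the test-distribution risk. Assuming known Gaussian densities here is a simplification; in practice the ratio is usually estimated, e.g. with a domain classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training inputs drawn from N(0, 1); test distribution is N(1, 1).
x_train = rng.normal(loc=0.0, scale=1.0, size=500)
y_train = 2.0 * x_train + rng.normal(scale=0.1, size=500)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x) (densities assumed known).
weights = gaussian_pdf(x_train, 1.0, 1.0) / gaussian_pdf(x_train, 0.0, 1.0)

# Weighted least squares: minimize sum_i w_i * (y_i - a * x_i)^2,
# which has the closed-form solution below for a single slope a.
a_weighted = np.sum(weights * x_train * y_train) / np.sum(weights * x_train**2)
print(a_weighted)
```

With a well-specified model both the weighted and unweighted estimators recover the true slope; the weighting matters when the model is misspecified and errors must be traded off where the test distribution has mass.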
Existing efforts model the training dynamics of GANs in parameter space, but their analyses do not directly motivate practically effective stabilization methods.