L2 Regularization
21 papers with code • 0 benchmarks • 0 datasets
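L2 regularization (also known as weight decay or, in linear models, ridge regression) adds a penalty proportional to the squared norm of the model weights to the training objective, shrinking weights toward zero to reduce overfitting. A minimal NumPy sketch of closed-form ridge regression; the `ridge_fit` helper and the toy data are illustrative, not from any listed paper:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: minimizes ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    # Normal equations with an L2 penalty added to the Gram matrix.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy usage: recover weights from noisy linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)
print(ridge_fit(X, y, lam=0.1))
```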
Most implemented papers
Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines
Continual learning has recently received a great deal of attention, with several approaches being proposed.
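One strong baseline in this setting is plain L2 regularization that anchors the current weights to a snapshot saved after the previous task. A hedged PyTorch sketch; the `l2_transfer_penalty` helper name and penalty strength are assumptions, not the paper's exact method:

```python
import torch

def l2_transfer_penalty(model, old_params, lam=0.1):
    """L2 penalty pulling current weights toward a snapshot taken
    after the previous task (illustrative continual-learning baseline)."""
    return lam * sum(((p - p_old) ** 2).sum()
                     for p, p_old in zip(model.parameters(), old_params))

# Usage sketch: snapshot after task 1, penalize drift while training task 2.
# old_params = [p.detach().clone() for p in model.parameters()]
# loss = task_loss + l2_transfer_penalty(model, old_params)
```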
Convolutional Neural Networks for Facial Expression Recognition
We have developed convolutional neural networks (CNNs) for a facial expression recognition task.
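The summary above does not pin down an architecture; as a hedged illustration, here is a small PyTorch CNN for 48x48 grayscale face crops (FER-style input, 7 expression classes), with L2 regularization applied through the optimizer's `weight_decay`:

```python
import torch.nn as nn
import torch.optim as optim

# Illustrative architecture only; layer sizes are assumptions.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 7),  # 48x48 input halved twice -> 12x12 maps
)
# weight_decay applies an L2 penalty to the weights at every step.
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```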
On Regularization Parameter Estimation under Covariate Shift
This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting.
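A common remedy in this literature is to reweight validation losses by importance weights (target/source density ratios) when choosing the penalty. A NumPy sketch under that assumption; the helper names are mine, and estimating the weights themselves is out of scope here:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution for a given L2 penalty."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def select_lambda_iw(X_tr, y_tr, X_val, y_val, iw, lambdas):
    """Pick the L2 penalty minimizing the importance-weighted validation
    loss, where `iw` holds estimated density ratios for validation points."""
    scores = []
    for lam in lambdas:
        w = ridge_fit(X_tr, y_tr, lam)
        scores.append(np.mean(iw * (X_val @ w - y_val) ** 2))
    return lambdas[int(np.argmin(scores))]
```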
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World
In this paper, we focus on online representation learning in non-stationary environments, which may require continuous adaptation of the model architecture.
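As a loose analogue of the "neurogenesis" idea, a dictionary can grow a new atom online when it reconstructs an incoming sample poorly. The thresholding rule and helper below are my illustration, not the paper's algorithm:

```python
import numpy as np

def maybe_grow_dictionary(D, x, err_threshold=0.5):
    """Append a new atom when the current dictionary D (features x atoms)
    reconstructs sample x poorly (illustrative growth rule only)."""
    code, *_ = np.linalg.lstsq(D, x, rcond=None)
    residual = x - D @ code
    if np.linalg.norm(residual) > err_threshold:
        # New atom points at the unexplained part of the sample.
        D = np.column_stack([D, residual / (np.linalg.norm(residual) + 1e-12)])
    return D
```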
Collaboratively Weighting Deep and Classic Representation via L2 Regularization for Image Classification
We propose a deep collaborative weight-based classification (DeepCWC) method that provides a novel way to take full advantage of deep features in classic machine learning.
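A simplified reading of the idea: fuse deep features with classic hand-crafted features and fit an L2-regularized linear classifier on the concatenation. A scikit-learn sketch; DeepCWC's actual collaborative weighting is richer than this:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

def collaborative_classify(deep_feats, classic_feats, labels, alpha=1.0):
    """Concatenate deep and classic feature matrices, then fit an
    L2-regularized linear classifier (alpha is the L2 penalty strength)."""
    X = np.hstack([deep_feats, classic_feats])
    return RidgeClassifier(alpha=alpha).fit(X, labels)
```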
Quantifying Generalization in Reinforcement Learning
In this paper, we investigate the problem of overfitting in deep reinforcement learning.
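L2 weight decay is among the regularizers evaluated against such overfitting; a minimal sketch of attaching it to a policy network's optimizer, where the network shape is a placeholder:

```python
import torch.nn as nn
import torch.optim as optim

# Placeholder policy network: 64-dim observations, 4 discrete actions.
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
# weight_decay adds an L2 penalty, one regularizer studied for RL overfitting.
optimizer = optim.Adam(policy.parameters(), lr=3e-4, weight_decay=1e-4)
```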
What is the Effect of Importance Weighting in Deep Learning?
Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning.
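Concretely, importance-weighted risk minimization scales each example's loss by a weight before averaging. A PyTorch sketch; the helper name is mine:

```python
import torch
import torch.nn.functional as F

def importance_weighted_loss(logits, targets, weights):
    """Per-example cross-entropy scaled by importance weights
    (e.g. density ratios or class-imbalance weights), then averaged."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()
```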
Learning a smooth kernel regularizer for convolutional neural networks
We propose a smooth kernel regularizer that encourages spatial correlations in convolution kernel weights.
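A generic version of this idea penalizes squared differences between spatially adjacent kernel weights. The sketch below is a stand-in for the paper's correlated prior, not its exact formulation:

```python
import torch

def smoothness_penalty(kernel, lam=1e-3):
    """Penalize squared differences between neighboring weights in a
    conv kernel of shape (out_channels, in_channels, H, W), encouraging
    spatially smooth filters."""
    dh = kernel[..., 1:, :] - kernel[..., :-1, :]  # vertical neighbors
    dw = kernel[..., :, 1:] - kernel[..., :, :-1]  # horizontal neighbors
    return lam * ((dh ** 2).sum() + (dw ** 2).sum())
```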
Understanding and Stabilizing GANs' Training Dynamics with Control Theory
Existing efforts model the training dynamics of GANs in the parameter space, but their analyses cannot directly motivate practically effective stabilization methods.
Data and Model Dependencies of Membership Inference Attack
Our results reveal the relationship between membership inference attack (MIA) accuracy and properties of the dataset and the model used in training.