Model Selection
546 papers with code • 0 benchmarks • 1 dataset
Given a set of candidate models, the goal of Model Selection is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined such that they strike a balance between the goodness of fit and the generalizability or complexity of the models.
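As a concrete illustration, classical criteria such as AIC and BIC penalize a model's maximized log-likelihood by its number of parameters. A minimal sketch (the polynomial-degree comparison and all variable names are illustrative, not taken from any listed paper):

```python
import numpy as np

def aic_bic(log_likelihood, k, n):
    """AIC and BIC for a model with k parameters fit to n observations."""
    aic = 2 * k - 2 * log_likelihood
    bic = k * np.log(n) - 2 * log_likelihood
    return aic, bic

# Example: compare polynomial degrees on noisy data via Gaussian log-likelihood.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = resid.var()                      # MLE of the noise variance
    ll = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                            # polynomial coefficients + noise variance
    aic, bic = aic_bic(ll, k, len(y))
    print(f"degree={degree}: AIC={aic:.1f}, BIC={bic:.1f}")
```

Both criteria reward fit (through the likelihood) and penalize complexity (through k); BIC's log(n) factor penalizes parameters more heavily as the sample grows.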
Benchmarks
These leaderboards are used to track progress in Model Selection. No benchmark results have been reported yet.
Most implemented papers
BERTScore: Evaluating Text Generation with BERT
We propose BERTScore, an automatic evaluation metric for text generation.
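The reference implementation is distributed as the bert_score package; a minimal usage sketch (the candidate and reference sentences are made up):

```python
from bert_score import score  # pip install bert-score

candidates = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

# Precision, recall, and F1 computed from pairwise BERT-embedding similarities.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.3f}")
```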
In Search of Lost Domain Generalization
As a first step, we realize that model selection is non-trivial for domain generalization tasks.
Population Based Training of Neural Networks
Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm.
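Population Based Training addresses this by optimizing a population of models in parallel: periodically, weaker members copy the weights of stronger ones (exploit) and perturb their hyperparameters (explore). A schematic sketch, where the worker dictionary fields, perturbation factors, and toy scoring are illustrative rather than the paper's code:

```python
import copy
import random

def pbt_step(population, evaluate, perturb_factors=(0.8, 1.2), bottom_frac=0.25):
    """One exploit/explore round: bottom workers clone a top worker and perturb."""
    ranked = sorted(population, key=evaluate, reverse=True)
    cutoff = max(1, int(len(ranked) * bottom_frac))
    top, bottom = ranked[:cutoff], ranked[-cutoff:]
    for worker in bottom:
        source = random.choice(top)
        worker["weights"] = copy.deepcopy(source["weights"])           # exploit
        worker["lr"] = source["lr"] * random.choice(perturb_factors)   # explore
    return population

# Toy usage with four workers scored by an arbitrary fitness function.
population = [{"weights": {"w": i}, "lr": 10.0 ** -i} for i in range(1, 5)]
population = pbt_step(population, evaluate=lambda w: -w["lr"])
```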
Data Splits and Metrics for Method Benchmarking on Surgical Action Triplet Datasets
We also develop a metrics library, ivtmetrics, for model evaluation on surgical triplets.
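ivtmetrics exposes its own evaluation classes; as a library-agnostic illustration of the underlying computation (mean average precision over multi-label triplet classes), here is a scikit-learn sketch with toy sizes rather than the CholecT45 label space:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy multi-label predictions: one row per video frame, one column per triplet class.
rng = np.random.default_rng(0)
n_frames, n_triplets = 200, 10
y_true = rng.integers(0, 2, size=(n_frames, n_triplets))
y_score = rng.random(size=(n_frames, n_triplets))

ap_per_class = [
    average_precision_score(y_true[:, c], y_score[:, c])
    for c in range(n_triplets)
    if y_true[:, c].any()          # skip classes absent from this split
]
print(f"mAP: {np.mean(ap_per_class):.3f}")
```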
Deep Domain Confusion: Maximizing for Domain Invariance
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.
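The core idea is to add a domain-confusion term, the Maximum Mean Discrepancy (MMD) between source and target activations, to the classification loss. A minimal linear-kernel MMD sketch in PyTorch, where the feature tensors and loss weight are placeholders:

```python
import torch

def linear_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Squared distance between the mean source and mean target embeddings."""
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

# total_loss = classification_loss + lambda_mmd * linear_mmd(f_src, f_tgt)
```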
metric-learn: Metric Learning Algorithms in Python
metric-learn is an open source Python package implementing supervised and weakly-supervised distance metric learning algorithms.
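The package follows scikit-learn conventions (fit/transform). A minimal supervised example using Neighborhood Components Analysis on the iris data, chosen here purely for illustration:

```python
from metric_learn import NCA
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# Learn a Mahalanobis metric with NCA, then map the data into the learned space.
nca = NCA(max_iter=100, random_state=42)
nca.fit(X, y)
X_embedded = nca.transform(X)
print(X_embedded.shape)
```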
Conditional Density Estimation Tools in Python and R with Applications to Photometric Redshifts and Likelihood-Free Cosmological Inference
We provide sample code in $\texttt{Python}$ and $\texttt{R}$ as well as examples of applications to photometric redshift estimation and likelihood-free cosmological inference via CDE.
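A standard model-selection tool in this setting is the CDE loss, $L(\hat f) = \mathbb{E}\big[\int \hat f(z \mid x)^2\, dz\big] - 2\,\mathbb{E}\big[\hat f(Z \mid X)\big]$, which can be estimated on held-out data up to a constant. A numpy sketch of the empirical estimator (the density grid and toy data are placeholders, not the paper's package API):

```python
import numpy as np

def cde_loss(cde_estimates, z_grid, z_true):
    """Empirical CDE loss (up to a constant).

    cde_estimates: (n, m) density values on z_grid for each held-out x
    z_grid:        (m,) evaluation grid
    z_true:        (n,) observed responses
    """
    term1 = np.mean(np.trapz(cde_estimates ** 2, z_grid, axis=1))
    nearest = np.abs(z_grid[None, :] - z_true[:, None]).argmin(axis=1)
    term2 = cde_estimates[np.arange(len(z_true)), nearest]
    return term1 - 2 * np.mean(term2)

# Toy check: every x is assigned a standard normal conditional density.
z_grid = np.linspace(-4, 4, 200)
z_true = np.random.default_rng(0).normal(size=100)
dens = np.exp(-z_grid[None, :] ** 2 / 2) / np.sqrt(2 * np.pi)
print(cde_loss(np.repeat(dens, 100, axis=0), z_grid, z_true))
```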
Laplace Redux -- Effortless Bayesian Deep Learning
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.
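The accompanying laplace-torch library makes this a post-hoc step on a trained network; a sketch of last-layer Laplace with marginal-likelihood-based model selection, using a toy model and synthetic data in place of a real training setup:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace  # pip install laplace-torch

# Toy setup: a small classifier and synthetic data standing in for a trained model.
X = torch.randn(256, 10)
y = torch.randint(0, 3, (256,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 3))

# Post-hoc Laplace approximation over the last layer.
la = Laplace(model, likelihood="classification",
             subset_of_weights="last_layer", hessian_structure="kron")
la.fit(train_loader)

# The Laplace marginal likelihood drives Bayesian model selection,
# e.g. tuning the prior precision or comparing architectures.
la.optimize_prior_precision(method="marglik")
print(la.log_marginal_likelihood())
```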
Neural Vector Spaces for Unsupervised Information Retrieval
We propose the Neural Vector Space Model (NVSM), a method that learns representations of documents in an unsupervised manner for news article retrieval.
Learning Sparse Neural Networks through $L_0$ Regularization
We further propose the \emph{hard concrete} distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
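Concretely, a hard concrete sample is a binary concrete sample stretched beyond (0, 1) and then clamped by a hard sigmoid, so exact zeros and ones occur with positive probability. A sketch following the paper's construction, using the stretch limits $\gamma = -0.1$, $\zeta = 1.1$ reported there:

```python
import torch

def hard_concrete_sample(log_alpha, beta=2 / 3, gamma=-0.1, zeta=1.1):
    """Sample a hard concrete gate z in [0, 1] (Louizos et al., 2018)."""
    u = torch.rand_like(log_alpha)
    # Binary concrete sample in (0, 1).
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    # "Stretch" to (gamma, zeta), then clamp with a hard sigmoid.
    s_stretched = s * (zeta - gamma) + gamma
    return torch.clamp(s_stretched, 0.0, 1.0)

gates = hard_concrete_sample(torch.zeros(5))
print(gates)  # some gates land exactly at 0 or 1
```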