# Model Selection

448 papers with code • 0 benchmarks • 1 dataset

Given a set of candidate models, the goal of **Model Selection** is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined to strike a balance between the goodness of fit and the generalizability or complexity of the models.
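Information criteria such as AIC make this trade-off concrete: a fit term plus a penalty that grows with the number of parameters. As a minimal illustrative sketch (not tied to any particular paper below), one can fit polynomials of increasing degree to noisy data and keep the degree with the lowest AIC:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
# Noisy observations from a true quadratic model.
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=x.size)

def aic(y, y_hat, k):
    # Gaussian log-likelihood up to a constant gives n*log(RSS/n);
    # k is the number of free parameters in the model.
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

scores = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    scores[degree] = aic(y, y_hat, degree + 1)  # degree+1 coefficients

best = min(scores, key=scores.get)  # degree with the lowest AIC
```

Higher-degree models always reduce the residual sum of squares, but the `2 * k` penalty makes the criterion prefer the simplest model that fits well.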

## Benchmarks

These leaderboards are used to track progress in Model Selection.

## Libraries

Use these libraries to find Model Selection models and implementations.

## Most implemented papers

### BERTScore: Evaluating Text Generation with BERT

We propose BERTScore, an automatic evaluation metric for text generation.
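The core idea is to match candidate and reference tokens greedily by the cosine similarity of their contextual embeddings, then combine the matches into precision, recall, and F1. A toy sketch of that scoring idea, with random vectors standing in for BERT token embeddings (shapes and names here are illustrative, not the library's API):

```python
import numpy as np

def bertscore_f1(cand_emb, ref_emb):
    """Toy BERTScore-style F1 from token embeddings via greedy cosine matching."""
    # Normalize rows so dot products are cosine similarities.
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T                        # pairwise token similarities
    precision = sim.max(axis=1).mean()   # each candidate token -> best ref match
    recall = sim.max(axis=0).mean()      # each reference token -> best cand match
    return 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))  # 4 tokens, 8-dim toy embeddings
# Identical candidate and reference embeddings yield a perfect score of 1.0.
```

The real metric uses actual BERT-family encoders and optional IDF weighting, but the matching logic is this precision/recall over best cosine matches.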

### Population Based Training of Neural Networks

Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm.

### In Search of Lost Domain Generalization

As a first step, we realize that model selection is non-trivial for domain generalization tasks.

### Data Splits and Metrics for Method Benchmarking on Surgical Action Triplet Datasets

We also develop a metrics library, ivtmetrics, for model evaluation on surgical triplets.

### Deep Domain Confusion: Maximizing for Domain Invariance

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.

### Conditional Density Estimation Tools in Python and R with Applications to Photometric Redshifts and Likelihood-Free Cosmological Inference

We provide sample code in Python and R as well as examples of applications to photometric redshift estimation and likelihood-free cosmological inference via CDE.
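The paper describes its own toolkits; as a generic illustration of what a conditional density estimator computes (this is not the paper's code), a kernel-based estimate of p(y | x) can be sketched in plain NumPy:

```python
import numpy as np

def gaussian_kernel(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def conditional_density(x_train, y_train, x0, y_grid, hx=0.1, hy=0.2):
    """Kernel estimate of p(y | x = x0) evaluated on a grid of y values."""
    wx = gaussian_kernel(x_train - x0, hx)   # weight samples by closeness in x
    ky = gaussian_kernel(y_grid[:, None] - y_train[None, :], hy)
    return ky @ wx / wx.sum()                # weighted KDE over y

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)
grid = np.linspace(-2.0, 2.0, 201)
dens = conditional_density(x, y, x0=0.25, y_grid=grid)
# dens approximates the full density of y given x = 0.25, not just its mean.
```

The bandwidths `hx` and `hy` play the role of the smoothing choices that real CDE toolkits tune for you; the estimate integrates to one over the y grid.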

### Neural Vector Spaces for Unsupervised Information Retrieval

We propose the Neural Vector Space Model (NVSM), a method that learns representations of documents in an unsupervised manner for news article retrieval.

### Learning Sparse Neural Networks through $L_0$ Regularization

We further propose the *hard concrete* distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
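That sampling procedure can be sketched in a few lines. The parameter values below (`beta`, `gamma`, `zeta`) are the defaults commonly used with this distribution and should be treated as assumptions of this sketch:

```python
import numpy as np

def hard_concrete_sample(log_alpha, beta=2 / 3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample hard concrete gates: stretch a binary concrete sample to
    (gamma, zeta), then clip it back to [0, 1] with a hard-sigmoid."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    # Binary concrete sample via the logistic reparameterization trick.
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    s_bar = s * (zeta - gamma) + gamma   # stretch beyond [0, 1]
    return np.clip(s_bar, 0.0, 1.0)     # hard-sigmoid: exact zeros and ones

gates = hard_concrete_sample(np.zeros(10000), rng=np.random.default_rng(0))
```

Because the stretched support extends past 0 and 1, the clip places nonzero probability mass on exact zeros, which is what makes the relaxation useful for $L_0$-style sparsity.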

### Tune: A Research Platform for Distributed Model Selection and Training

We show that Tune's interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation.

### Variational Bayesian Monte Carlo

We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC).