Hyperparameter Optimization
279 papers with code • 1 benchmark • 3 datasets
Hyperparameter Optimization is the problem of choosing a set of optimal hyperparameters for a learning algorithm. These settings govern how well the algorithm fits the data and strongly influence whether it overfits or underfits. Each model requires different assumptions, weightings, or training speeds for different types of data under a given loss function.
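As a minimal sketch of the idea, the snippet below runs a random search over two hypothetical hyperparameters (a learning rate and a tree count) against a stand-in validation-loss function; in practice the loss would come from training a real model and evaluating it on held-out data, and the parameter names and ranges here are illustrative assumptions, not any specific library's API.

```python
import random

def validation_loss(learning_rate, num_trees):
    # Hypothetical validation-loss surface standing in for
    # "train a model with these hyperparameters, score it on held-out data".
    return (learning_rate - 0.1) ** 2 + (num_trees - 200) ** 2 / 1e5

def random_search(n_trials=100, seed=0):
    rng = random.Random(seed)
    best_loss, best_params = float("inf"), None
    for _ in range(n_trials):
        # Sample a candidate configuration from the search space.
        params = {
            "learning_rate": rng.uniform(0.001, 0.5),
            "num_trees": rng.randint(50, 500),
        }
        loss = validation_loss(**params)
        if loss < best_loss:
            best_loss, best_params = loss, params
    return best_loss, best_params

best_loss, best_params = random_search()
```

More sample-efficient strategies (Bayesian optimization, bandit-based early stopping, evolutionary search) follow the same loop but choose the next candidate using information from previous trials rather than sampling blindly.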
Latest papers
Teaching Specific Scientific Knowledge into Large Language Models through Additional Training
Through additional training, we explore embedding specialized scientific knowledge into the Llama 2 Large Language Model (LLM).
TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications
We introduce TabRepo, a new dataset of tabular model evaluations and predictions.
Hodge-Compositional Edge Gaussian Processes
We propose principled Gaussian processes (GPs) for modeling functions defined over the edge set of a simplicial 2-complex, a structure similar to a graph in which edges may form triangular faces.
Large-Scale Gaussian Processes via Alternating Projection
Training and inference in Gaussian processes (GPs) require solving linear systems with $n\times n$ kernel matrices.
Hyperparameter Optimization for Multi-Objective Reinforcement Learning
Prior research has explored hyperparameter optimization in RL to address the sensitivity of RL algorithms to their hyperparameters.
Machine Learning in the Quantum Age: Quantum vs. Classical Support Vector Machines
This work endeavors to juxtapose the efficacy of machine learning algorithms within classical and quantum computational paradigms.
Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
Evaluating the adversarial robustness of machine learning models using gradient-based attacks is challenging.
Auto-FP: An Experimental Study of Automated Feature Preprocessing for Tabular Data
This observation enables us to extend a variety of HPO and NAS algorithms to solve the Auto-FP problem.
Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning
In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparably in the case of an advanced user who knows which indicator to pick.
Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark
The recent Long-Range Graph Benchmark (LRGB, Dwivedi et al. 2022) introduced a set of graph learning tasks strongly dependent on long-range interaction between vertices.