Random Search replaces the exhaustive enumeration of all combinations with randomly sampled configurations. It applies directly to the discrete setting described above, but also generalizes to continuous and mixed spaces. It can outperform Grid Search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm; in this case, the optimization problem is said to have a low intrinsic dimensionality. Random Search is also embarrassingly parallel, and additionally allows prior knowledge to be incorporated by specifying the distributions from which to sample.
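As a sketch of the idea, the following minimal Python example samples configurations from a mixed discrete/continuous space and keeps the best one. The search space, objective, and hyperparameter names here are hypothetical, chosen only for illustration; the log-uniform draw for the learning rate shows how a prior can be encoded in the sampling distribution.

```python
import math
import random

# Hypothetical mixed search space: each entry is a sampling function
# encoding a prior over that hyperparameter.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),  # log-uniform prior
    "num_layers": lambda: random.choice([1, 2, 3, 4]),      # discrete choice
    "dropout": lambda: random.uniform(0.0, 0.5),            # continuous uniform
}

def sample_config(space):
    """Draw one configuration by sampling each hyperparameter independently."""
    return {name: draw() for name, draw in space.items()}

def random_search(objective, space, n_trials=20, seed=0):
    """Evaluate n_trials random configurations; return the best and its score."""
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(space)
        score = objective(config)  # trials are independent: trivially parallel
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Toy objective (stand-in for validation accuracy): depends mostly on
# learning_rate and dropout, illustrating low intrinsic dimensionality.
def toy_objective(config):
    return -abs(math.log10(config["learning_rate"]) + 2) - config["dropout"]

best, score = random_search(toy_objective, search_space, n_trials=100)
```

Because each trial is independent, the loop body can be distributed across workers without coordination, which is what makes the method embarrassingly parallel.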
*Extracted from Wikipedia. Image source: Bergstra and Bengio.*
| Task | Papers | Share |
|---|---|---|
| Bayesian Optimization | 50 | 12.85% |
| Reinforcement Learning | 29 | 7.46% |
| Reinforcement Learning (RL) | 28 | 7.20% |
| BIG-bench Machine Learning | 20 | 5.14% |
| Deep Reinforcement Learning | 11 | 2.83% |
| Image Classification | 9 | 2.31% |
| Regression | 8 | 2.06% |
| Evolutionary Algorithms | 7 | 1.80% |
| General Classification | 7 | 1.80% |