Ensemble Learning
247 papers with code • 1 benchmark • 3 datasets
Libraries
Use these libraries to find Ensemble Learning models and implementations.
Most implemented papers
Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
Ensemble models comprising deep Convolutional Neural Networks (CNNs) have shown significant improvements in model generalization, but at the cost of large computation and memory requirements.
Ensemble learning in CNN augmented with fully connected subnetworks
The output of the overall model is determined by majority vote of the base CNN and the FCSNs.
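The majority-vote rule described above can be sketched generically. The snippet below is a minimal illustration of hard-label majority voting across an ensemble, not the paper's specific CNN/FCSN architecture:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.

    predictions: array-like of shape (n_models, n_samples) holding
    integer class labels. Returns one label per sample; ties are
    broken in favor of the lowest label (np.argmax behavior).
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (columns), then pick the
    # class with the most votes.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions
    )
    return votes.argmax(axis=0)

# Three models voting on four samples.
preds = [[0, 1, 1, 2],
         [0, 1, 2, 2],
         [1, 1, 2, 0]]
print(majority_vote(preds))  # [0 1 2 2]
```

In the paper's setting, the rows would be the predictions of the base CNN and each fully connected subnetwork (FCSN).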
Sample Efficient Ensemble Learning with Catalyst.RL
We present Catalyst. RL, an open-source PyTorch framework for reproducible and sample efficient reinforcement learning (RL) research.
Multiple Expert Brainstorming for Domain Adaptive Person Re-identification
Often the best-performing deep neural models are ensembles of multiple base-level networks; nevertheless, ensemble learning for domain adaptive person re-ID remains unexplored.
Conformal prediction interval for dynamic time-series
We develop a method to construct distribution-free prediction intervals for dynamic time-series, called EnbPI, that wraps around any bootstrap ensemble estimator to construct sequential prediction intervals.
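The core idea of wrapping a bootstrap ensemble to get prediction intervals can be illustrated with a toy example. This is a simplified sketch (bootstrap models are plain least-squares slopes, and the interval is a residual quantile), not the full EnbPI algorithm with its leave-one-out aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise.
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + rng.normal(0, 1.0, size=200)

# Fit B bootstrap "models" -- here simple least-squares slopes,
# a stand-in for the arbitrary estimators the method wraps.
B = 20
slopes = []
for _ in range(B):
    idx = rng.integers(0, len(x), size=len(x))
    xb, yb = x[idx], y[idx]
    slopes.append((xb @ yb) / (xb @ xb))

# Aggregate ensemble predictions, then use a quantile of the absolute
# residuals as the half-width of a distribution-free interval.
preds = np.mean([s * x for s in slopes], axis=0)
residuals = np.abs(y - preds)
w = np.quantile(residuals, 0.9)  # ~90% coverage half-width

# Interval for a new point.
x_new = 5.0
center = np.mean([s * x_new for s in slopes])
interval = (center - w, center + w)
```

In the sequential setting of the paper, the residual set would be updated as new observations arrive, so the intervals adapt over time.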
Using Transformer based Ensemble Learning to classify Scientific Articles
The first one is a RoBERTa [10]-based model built over these abstracts.
Domain Generalization: A Survey
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce.
Multi-Disease Detection in Retinal Imaging based on Ensembling Heterogeneous Deep Learning Models
Preventable or undiagnosed visual impairment and blindness affect billions of people worldwide.
Supervised Anomaly Detection via Conditional Generative Adversarial Network and Ensemble Active Learning
In addition to using the conditional GAN to generate class-balanced supplementary training data, an ensemble learning loss function is designed so that each discriminator compensates for the deficiencies of the others, overcoming the class-imbalance problem. An active learning algorithm is also introduced to significantly reduce the cost of labeling real-world data.
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Our framework, FreeTickets, is defined as the ensemble of these relatively cheap sparse subnetworks.