Active Learning
754 papers with code • 1 benchmark • 15 datasets
Active Learning is a supervised machine learning paradigm that achieves strong performance with fewer labeled training examples by iteratively training a predictor and, at each iteration, using that predictor to select the training examples most likely to improve both the model configuration and the accuracy of its predictions.
Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
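The iterative loop described above can be sketched with pool-based uncertainty sampling: train on a small labeled set, score the unlabeled pool by model uncertainty, and query the least confident example each round. This is a minimal illustration using scikit-learn on synthetic data; the dataset sizes, seed-set size, and query budget are illustrative assumptions, not values from any of the papers listed here.

```python
# Minimal pool-based active learning sketch with uncertainty sampling.
# Assumes synthetic data; in practice the queried label would come from
# a human oracle rather than a pre-known y array.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Start from a small randomly labeled seed set; the rest form the pool.
labeled = [int(i) for i in rng.choice(len(X), size=10, replace=False)]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # query budget: 20 additional labels
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Least-confidence score: 1 minus the top class probability.
    uncertainty = 1.0 - probs.max(axis=1)
    query = pool.pop(int(np.argmax(uncertainty)))
    labeled.append(query)  # an oracle would supply y[query] here

model.fit(X[labeled], y[labeled])
print(len(labeled))  # 30 labeled examples used in total
```

Swapping the least-confidence score for entropy or a committee disagreement measure changes the acquisition strategy without altering the loop structure.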
Libraries
Use these libraries to find Active Learning models and implementations.
Latest papers with no code
AI-Guided Feature Segmentation Techniques to Model Features from Single Crystal Diamond Growth
This paper compares various traditional and machine learning-driven approaches for feature extraction in the diamond growth domain, proposing a novel deep learning-driven semantic segmentation approach to isolate and classify accurate pixel masks of geometric features like diamond, pocket holder, and background, along with their derivative features based on shape and size.
ProtoAL: Interpretable Deep Active Learning with prototypes for medical imaging
This approach aims to address both challenges by focusing on the medical imaging context and utilizing an inherently interpretable model based on prototypes.
Focused Active Learning for Histopathological Image Classification
The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value.
Active Causal Learning for Decoding Chemical Complexities with Targeted Interventions
While our implementation focused on the QM9 quantum-chemical dataset for a specific design task (finding molecules with a large dipole moment), our active causal learning approach, driven by intelligent sampling and interventions, holds potential for broader applications in molecular and materials design and discovery.
Active learning for efficient annotation in precision agriculture: a use-case on crop-weed semantic segmentation
Our results indicated that active learning, especially PowerBALD, yields a higher performance than Random sampling on both datasets.
Uncertainty-aware Active Learning of NeRF-based Object Models for Robot Manipulators using Visual and Re-orientation Actions
This paper presents an approach that enables a robot to rapidly learn the complete 3D model of a given object for manipulation in unfamiliar orientations.
Hallucination Diversity-Aware Active Learning for Text Summarization
Large Language Models (LLMs) have shown a propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported.
Adaptive Combinatorial Maximization: Beyond Approximate Greedy Policies
Our approximation guarantees simultaneously support the maximal gain ratio as well as near-submodular utility functions, and include both maximization under a cardinality constraint and a minimum cost coverage guarantee.
Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation
Model-based RL, by building a dynamics model of the robot, enables data reuse and transfer learning between tasks with the same robot and similar environment.
LLMs in the Loop: Leveraging Large Language Model Annotations for Active Learning in Low-Resource Languages
To address this gap, we propose leveraging the potential of LLMs in the active learning loop for data annotation.