Active Learning

754 papers with code • 1 benchmark • 15 datasets

Active Learning is a paradigm in supervised machine learning that reduces the number of training examples needed by iteratively training a predictor and, at each iteration, using it to select the training examples most likely both to lead to better model configurations and to improve the accuracy of the prediction model.

Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
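The loop described above can be sketched with uncertainty sampling, one of the simplest query strategies. This is a minimal illustration on synthetic data, not the method of any particular paper listed below:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool: 2-D points whose true label is the sign of x0 + x1.
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool.sum(axis=1) > 0).astype(int)

# Seed set: a few labeled examples from each class.
labeled = list(np.flatnonzero(y_pool == 0)[:5]) + list(np.flatnonzero(y_pool == 1)[:5])
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression()
for _ in range(5):  # five acquisition rounds
    model.fit(X_pool[labeled], y_pool[labeled])
    # Uncertainty sampling: query the pool point whose predicted
    # positive-class probability is closest to 0.5.
    probs = model.predict_proba(X_pool[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)    # an oracle reveals the label y_pool[query]
    unlabeled.remove(query)

print(model.score(X_pool, y_pool))
```

In practice the oracle is a human annotator, and the acquisition function (here, distance from 0.5) is the main design choice.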

Libraries

Use these libraries to find Active Learning models and implementations

Latest papers with no code

AI-Guided Feature Segmentation Techniques to Model Features from Single Crystal Diamond Growth

no code yet • 10 Apr 2024

This paper compares various traditional and machine-learning-driven approaches to feature extraction in the diamond growth domain, and proposes a novel deep-learning-driven semantic segmentation approach that isolates and classifies accurate pixel masks of geometric features (diamond, pocket holder, and background) along with their derivative features based on shape and size.

ProtoAL: Interpretable Deep Active Learning with prototypes for medical imaging

no code yet • 6 Apr 2024

This approach aims to address both challenges by focusing on the medical imaging context and utilizing an inherently interpretable model based on prototypes.

Focused Active Learning for Histopathological Image Classification

no code yet • 6 Apr 2024

The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value.

Active Causal Learning for Decoding Chemical Complexities with Targeted Interventions

no code yet • 5 Apr 2024

While our implementation focused on the QM9 quantum-chemical dataset for a specific design task (finding molecules with a large dipole moment), our active causal learning approach, driven by intelligent sampling and interventions, holds potential for broader applications in molecular and materials design and discovery.

Active learning for efficient annotation in precision agriculture: a use-case on crop-weed semantic segmentation

no code yet • 3 Apr 2024

Our results indicate that active learning, especially PowerBALD, yields higher performance than random sampling on both datasets.
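As a rough sketch of what such an acquisition function involves (not the paper's implementation): BALD scores estimate the mutual information between a point's prediction and the model parameters from stochastic forward passes (e.g. MC dropout), and a "power" variant draws a batch stochastically, with probability roughly proportional to a power of the score, via the Gumbel top-k trick. The synthetic probabilities below stand in for real model outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def bald_scores(mc_probs):
    """mc_probs: (T, N, C) class probabilities from T stochastic
    forward passes (e.g. MC dropout) over N pool points."""
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                               # (N, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)
    mean_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(-1).mean(0)
    return entropy_of_mean - mean_entropy  # mutual information, >= 0

def power_bald_batch(mc_probs, k, beta=1.0):
    """Stochastic batch acquisition: Gumbel top-k over beta*log(score),
    i.e. sample k distinct points with probability ~ score**beta."""
    scores = bald_scores(mc_probs)
    gumbel = rng.gumbel(size=scores.shape)
    return np.argsort(beta * np.log(scores + 1e-12) + gumbel)[-k:]

# Toy example: 100 pool points, 3 classes, 20 MC samples of logits.
logits = rng.normal(size=(20, 100, 3))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
batch = power_bald_batch(probs, k=10)
print(batch)
```

The randomization avoids the redundancy of naively taking the top-k BALD points, which tend to be near-duplicates of each other.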

Uncertainty-aware Active Learning of NeRF-based Object Models for Robot Manipulators using Visual and Re-orientation Actions

no code yet • 2 Apr 2024

This paper presents an approach that enables a robot to rapidly learn the complete 3D model of a given object for manipulation in unfamiliar orientations.

Hallucination Diversity-Aware Active Learning for Text Summarization

no code yet • 2 Apr 2024

Large Language Models (LLMs) have shown a propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported.

Adaptive Combinatorial Maximization: Beyond Approximate Greedy Policies

no code yet • 2 Apr 2024

Our approximation guarantees simultaneously support the maximal gain ratio as well as near-submodular utility functions, and include both maximization under a cardinality constraint and a minimum cost coverage guarantee.
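For context, the baseline these guarantees extend is the classic greedy policy for maximizing a monotone submodular set function under a cardinality constraint, which achieves a (1 - 1/e) approximation. A minimal sketch with a toy coverage utility (the `cover` function is illustrative, not from the paper):

```python
def greedy_max(utility, ground_set, k):
    """Greedily build a set S with |S| <= k, adding at each step
    the element with the largest marginal gain in utility(S)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = utility(S | {e}) - utility(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:   # no element improves the utility
            break
        S.add(best)
    return S

# Toy coverage utility: number of universe elements covered.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}

def cover(S):
    covered = set()
    for s in S:
        covered |= sets[s]
    return len(covered)

print(greedy_max(cover, set(sets), k=2))  # selects {'a', 'c'}, covering all 7 elements
```

Coverage is submodular (adding a set helps less as more is already covered), which is what makes the greedy policy's guarantee apply.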

Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation

no code yet • 2 Apr 2024

Model-based RL, by building a dynamic model of the robot, enables data reuse and transfer learning between tasks with the same robot and similar environment.

LLMs in the Loop: Leveraging Large Language Model Annotations for Active Learning in Low-Resource Languages

no code yet • 2 Apr 2024

To address this gap, we propose leveraging the potential of LLMs in the active learning loop for data annotation.
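A minimal sketch of such a loop, with a toy heuristic standing in for the LLM annotator (`llm_annotate` is hypothetical; a real system would prompt a model API and parse the label from its response):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def llm_annotate(text):
    # Hypothetical stand-in for a prompted LLM returning a binary label.
    return int("good" in text)

# Toy unlabeled text pool (alternating sentiment for illustration).
pool = [f"{adj} product number {i}"
        for i, adj in enumerate(["good", "bad"] * 20)]

vec = TfidfVectorizer()
X = vec.fit_transform(pool)

labeled = [0, 1]                       # seed set, annotated by the LLM
labels = [llm_annotate(pool[i]) for i in labeled]
unlabeled = list(range(2, len(pool)))

clf = LogisticRegression()
for _ in range(6):                     # active learning rounds
    clf.fit(X[labeled], labels)
    # Query the text the current classifier is least sure about...
    probs = clf.predict_proba(X[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    # ...and let the LLM, rather than a human, supply the label.
    labels.append(llm_annotate(pool[query]))
    labeled.append(query)
    unlabeled.remove(query)

print(clf.score(X, [llm_annotate(t) for t in pool]))
```

The combination is attractive for low-resource languages precisely because the expensive step, human annotation, is replaced by LLM calls while the query strategy keeps the number of calls small.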