Active Learning
760 papers with code • 1 benchmark • 15 datasets
Active Learning is a paradigm in supervised machine learning that aims to reach strong performance with fewer labeled training examples. A predictor is trained iteratively, and in each iteration the current predictor is used to choose which examples to label next: those most likely to lead to better model configurations and, at the same time, to improve the accuracy of the prediction model.
Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
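The loop described above can be sketched in a few lines. The following is a minimal, hypothetical illustration using uncertainty (margin) sampling with a nearest-centroid classifier on synthetic two-blob data; real systems would substitute an actual model, dataset, and labeling oracle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two Gaussian blobs (a stand-in for a real dataset).
X = np.vstack([rng.normal(-2, 1, size=(100, 2)), rng.normal(2, 1, size=(100, 2))])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

def fit_centroids(X, y):
    """Nearest-centroid 'predictor' -- stands in for any supervised model."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_margin(centroids, X):
    """Margin = |d0 - d1| between class distances; small margin = uncertain."""
    d = np.stack([np.linalg.norm(X - c, axis=1) for c in centroids])
    return d.argmin(axis=0), np.abs(d[0] - d[1])

# Seed pool: one labeled example per class; the rest are "unlabeled".
labeled = [0, 100]
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):                 # 10 acquisition rounds, one query each
    model = fit_centroids(X[labeled], np.asarray(y[labeled]))
    _, margin = predict_margin(model, X[unlabeled])
    pick = unlabeled.pop(int(margin.argmin()))  # most uncertain point
    labeled.append(pick)                        # oracle reveals its label

model = fit_centroids(X[labeled], np.asarray(y[labeled]))
pred, _ = predict_margin(model, X)
print(f"labels used: {len(labeled)}, accuracy: {(pred == y).mean():.2f}")
```

Only 12 labels are consumed here; the acquisition rule (query the smallest-margin sample) is one of many possible strategies, alongside entropy sampling, query-by-committee, and the submodular or safety-aware criteria in the papers listed below.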
Latest papers
Global Safe Sequential Learning via Efficient Knowledge Transfer
As transferable source knowledge is often available in safety critical experiments, we propose to consider transfer safe sequential learning to accelerate the learning of safety.
ActiveRAG: Revealing the Treasures of Knowledge via Active Learning
Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks.
STENCIL: Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning
As supervised fine-tuning of pre-trained models within NLP applications increases in popularity, larger corpora of annotated data are required, especially with increasing parameter counts in large language models.
Towards accelerating physical discovery via non-interactive and interactive multi-fidelity Bayesian Optimization: Current challenges and future opportunities
Both computational and experimental material discovery bring forth the challenge of exploring multidimensional and often non-differentiable parameter spaces, such as phase diagrams of Hamiltonians with multiple interactions, composition spaces of combinatorial libraries, processing spaces, and molecular embedding spaces.
Key Patch Proposer: Key Patches Contain Rich Information
In this paper, we introduce a novel algorithm named Key Patch Proposer (KPP) designed to select key patches in an image without additional training.
Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm
To improve the contextual learning capabilities of LLMs in text-to-SQL, a workflow paradigm method is proposed, aiming to enhance the attention and problem-solving scope of LLMs through decomposition.
Video Annotator: A framework for efficiently building video classifiers using vision-language models and active learning
High-quality and consistent annotations are fundamental to the successful development of robust machine learning models.
ActiveAnno3D - An Active Learning Framework for Multi-Modal 3D Object Detection
We propose ActiveAnno3D, an active learning framework to select data samples for labeling that are of maximum informativeness for training.
Foundation Model Makes Clustering A Better Initialization For Cold-Start Active Learning
In this work, we propose to integrate foundation models with clustering methods to select samples for cold-start active learning initialization.
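The general cold-start idea (not this paper's exact pipeline) can be sketched as: embed the unlabeled pool with a pretrained foundation model, cluster the embeddings, and spend the initial label budget on the sample nearest each cluster center. Everything below is hypothetical; random vectors stand in for real foundation-model embeddings, and the k-means is a bare-bones Lloyd's iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for foundation-model embeddings of an unlabeled pool
# (in practice these would come from a pretrained vision/text encoder).
emb = np.vstack([rng.normal(m, 0.5, size=(50, 8)) for m in (-3.0, 0.0, 3.0)])

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; keeps a center in place if its cluster empties."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.stack([X[assign == c].mean(axis=0) if (assign == c).any()
                            else centers[c] for c in range(k)])
    return centers, assign

k = 3  # label budget for the cold-start round
centers, assign = kmeans(emb, k)

# Query the sample nearest each center: a diverse initial labeled set.
queries = [int(np.linalg.norm(emb - c, axis=1).argmin()) for c in centers]
print(sorted(queries))
```

The point of clustering first is diversity: with no trained model yet, uncertainty is undefined, so coverage of the embedding space is the only signal available for the first batch.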
Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees
In this paper, we propose the first general method, dubbed composite active learning (CAL), for multi-domain AL. Our approach explicitly considers the domain-level and instance-level information in the problem; CAL first assigns domain-level budgets according to domain-level importance, which is estimated by optimizing an upper error bound that we develop; with the domain-level budgets, CAL then leverages a certain instance-level query strategy to select samples to label from each domain.
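The two stages described in the abstract can be sketched as follows. This is a loose, hypothetical illustration, not the paper's method: in CAL the domain-level budgets come from optimizing an upper error bound, whereas here the importance weights are simply given, and least-confidence scores stand in for the instance-level query strategy.

```python
import numpy as np

rng = np.random.default_rng(2)
total_budget = 20

# Stage 1: domain-level budgets from (assumed, given) importance weights.
importance = np.array([0.5, 0.3, 0.2])
budgets = np.floor(importance * total_budget).astype(int)
budgets[0] += total_budget - budgets.sum()  # hand any rounding remainder to the top domain

# Stage 2: within each domain, spend that budget on the least-confident samples.
domain_conf = [rng.random(100) for _ in range(3)]  # per-sample confidence scores
selected = {d: np.argsort(conf)[:b]
            for d, (conf, b) in enumerate(zip(domain_conf, budgets))}

for d, idx in selected.items():
    print(f"domain {d}: {len(idx)} samples queried")
```

Splitting the budget before querying is what distinguishes multi-domain AL from simply pooling all domains: a single pooled query strategy could starve low-resource domains that the domain-level weighting deliberately protects.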