Active Learning
750 papers with code • 1 benchmarks • 15 datasets
Active Learning is a paradigm in supervised machine learning that achieves strong performance with fewer training examples. A predictor is trained iteratively, and in each iteration it is used to choose the training examples most likely to improve the model, simultaneously guiding the search toward better configurations and improving the accuracy of the prediction model.
Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
Libraries
Use these libraries to find Active Learning models and implementations
Datasets
Latest papers
Active Statistical Inference
This means that for the same number of collected samples, active inference enables smaller confidence intervals and more powerful p-values.
STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models
To address poor model calibration, we incorporate a regularization method during LoRA training to keep the model from becoming over-confident, and employ a Monte-Carlo dropout mechanism to enhance uncertainty estimation.
Accelerating materials discovery for polymer solar cells: Data-driven insights enabled by natural language processing
We present a natural language processing pipeline that was used to extract polymer solar cell property data from the literature and simulate various active learning strategies.
Efficiently Computable Safety Bounds for Gaussian Processes in Active Learning
Active learning of physical systems must commonly respect practical safety constraints, which restricts the exploration of the design space.
DistALANER: Distantly Supervised Active Learning Augmented Named Entity Recognition in the Open Source Software Ecosystem
With the AI revolution in place, the trend of building automated systems to support professionals in domains such as open source software, healthcare, banking, and transportation has become increasingly prominent.
Global Safe Sequential Learning via Efficient Knowledge Transfer
As transferable source knowledge is often available in safety critical experiments, we propose to consider transfer safe sequential learning to accelerate the learning of safety.
ActiveRAG: Revealing the Treasures of Knowledge via Active Learning
Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks.
STENCIL: Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning
As supervised fine-tuning of pre-trained models within NLP applications increases in popularity, larger corpora of annotated data are required, especially with increasing parameter counts in large language models.
Towards accelerating physical discovery via non-interactive and interactive multi-fidelity Bayesian Optimization: Current challenges and future opportunities
Both computational and experimental material discovery bring forth the challenge of exploring multidimensional and often non-differentiable parameter spaces, such as phase diagrams of Hamiltonians with multiple interactions, composition spaces of combinatorial libraries, processing spaces, and molecular embedding spaces.
Key Patch Proposer: Key Patches Contain Rich Information
In this paper, we introduce a novel algorithm named Key Patch Proposer (KPP) designed to select key patches in an image without additional training.