Active Learning

48 papers with code · Methodology

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

libact: Pool-based Active Learning in Python

1 Oct 2017 · ntucllab/libact

libact is a Python package designed to make active learning easier for general users. The package not only implements several popular active learning strategies, but also features the active-learning-by-learning meta-algorithm that assists users in automatically selecting the best strategy on the fly.
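The pool-based setting that libact targets can be sketched in a few lines of plain scikit-learn: fit a model on a small labeled seed set, then repeatedly query the pool point the model is least confident about. The sketch below is a generic illustration of uncertainty sampling, not libact's API; the toy data, seed indices, and query budget are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool: two Gaussian blobs in 2-D (illustrative data, not from libact).
X_pool = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_pool = np.array([0] * 100 + [1] * 100)

# Start with four labeled seed points; the rest act as the unlabeled pool.
labeled = [0, 50, 100, 150]
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression()
for _ in range(10):  # query budget of 10 labels
    model.fit(X_pool[labeled], y_pool[labeled])
    # Uncertainty sampling: query the point whose top-class probability is lowest.
    proba = model.predict_proba(X_pool[unlabeled])
    query = unlabeled[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)    # the oracle reveals this label
    unlabeled.remove(query)

model.fit(X_pool[labeled], y_pool[labeled])
print(f"labels used: {len(labeled)}, pool accuracy: {model.score(X_pool, y_pool):.2f}")
```

A library like libact wraps this loop behind reusable strategy objects, so the query rule can be swapped without rewriting the training loop.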


modAL: A modular active learning framework for Python

2 May 2018 · cosmic-cortex/modAL

modAL is a modular active learning framework for Python, aimed at making active learning research and practice simpler. Its distinguishing features are (i) a clear, modular object-oriented design and (ii) full compatibility with scikit-learn models and workflows.


Active Anomaly Detection via Ensembles: Insights, Algorithms, and Interpretability

23 Jan 2019 · shubhomoydas/ad_examples

In this paper, we study the problem of using active learning to automatically tune an ensemble of anomaly detectors to maximize the number of true anomalies discovered. We also present several algorithms for active learning with tree-based AD ensembles.


Active Anomaly Detection via Ensembles

17 Sep 2018 · shubhomoydas/ad_examples

In critical applications of anomaly detection, including computer security and fraud prevention, the anomaly detector must be configurable by the analyst to minimize the effort spent on false positives. First, we present an important insight into how anomaly detector ensembles are naturally suited for active learning.


Few-Shot Learning with Graph Neural Networks

10 Nov 2017 · vgsatorras/few-shot-gnn

We propose to study the problem of few-shot learning through the prism of inference on a partially observed graphical model, constructed from a collection of input images whose labels can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models.


ALiPy: Active Learning in Python

12 Jan 2019 · NUAA-AL/ALiPy

Supervised machine learning methods usually require a large set of labeled examples for model training. However, in many real applications there are plentiful unlabeled data but limited labeled data, and acquiring labels is costly.


Less is more: sampling chemical space with active learning

28 Jan 2018 · isayev/ASE_ANI

In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials. Finally, we show that our proposed AL technique develops a universal ANI potential (ANI-1x) that provides accurate energy and force predictions on the entire COMP6 benchmark.


Building a comprehensive syntactic and semantic corpus of Chinese clinical texts

7 Nov 2016 · WILAB-HIT/Resources

Objective: To build a comprehensive corpus covering syntactic and semantic annotations of Chinese clinical texts, with corresponding annotation guidelines and methods, and to develop tools trained on the annotated corpus, supplying baselines for research on Chinese texts in the clinical domain. Conclusions: In this study, several annotation guidelines and an annotation method for Chinese clinical texts were proposed, and a comprehensive corpus and its NLP modules were constructed, providing a foundation for further study of applying NLP techniques to Chinese texts in the clinical domain.


The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition

20 Nov 2015 · google/goldfinch

Current approaches for fine-grained recognition do the following: first, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding the existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories.


Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors

ICLR 2019 · brain-research/ncp

NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training. Empirically, we show that NCPs prevent overfitting outside of the training distribution and result in uncertainty estimates that are useful for active learning.
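A common way such uncertainty estimates feed into active learning is through an acquisition score like predictive entropy: points whose predictive distribution is closest to uniform are queried first. The sketch below is a generic illustration of this rule in numpy, not code from the paper's repository; the candidate probabilities are made up for the example.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a categorical predictive distribution, in nats.
    Higher entropy means higher uncertainty, i.e. a better query candidate."""
    probs = np.clip(probs, 1e-12, 1.0)  # guard log(0)
    return -np.sum(probs * np.log(probs), axis=-1)

# Three candidate inputs with their predicted class probabilities
# (illustrative values, not model output):
p = np.array([[0.98, 0.01, 0.01],      # confident -> low entropy
              [0.50, 0.50, 0.00],      # torn between two classes
              [1/3, 1/3, 1/3]])        # maximally uncertain -> high entropy
scores = predictive_entropy(p)
print(int(scores.argmax()))  # index of the point to query next -> 2
```

The value of reliable uncertainty estimates, as the NCP paper argues, is precisely that a score like this only selects informative points when the probabilities themselves are trustworthy out of distribution.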