Stochastic Generalized Adversarial Label Learning

3 Jun 2019 · Chidubem Arachie, Bert Huang

The use of machine learning models has grown substantially and is spreading into several application domains. A common need in using machine learning models is collecting the data required to train them. In some cases, labeling a massive dataset can be a crippling bottleneck, so there is a need to develop models that work when training labels for large amounts of data are not easily obtained. A possible solution is weak supervision, which uses noisy labels that are easily obtained from multiple sources. The challenge is how best to combine these noisy labels and train a model to perform well on a given task. In this paper, we propose stochastic generalized adversarial label learning (Stoch-GALL), a framework for training machine learning models that perform well when noisy and possibly correlated labels are provided. Our framework allows users to provide different weak labels and multiple constraints on these labels. Our model then attempts to learn parameters for the data by solving a non-zero-sum game optimization. The game is between an adversary that chooses labels for the data and a model that minimizes the error made by the adversarial labels. We test our method on three datasets by training convolutional neural network models that learn to classify image objects with limited access to training labels. Our approach is able to learn even in settings where the weak supervision confounds state-of-the-art weakly supervised learning methods. The results of our experiments demonstrate the applicability of this approach to general classification tasks.
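The game described in the abstract can be sketched as an alternating optimization: an adversary chooses soft labels subject to per-labeler error constraints (enforced here with Lagrange multipliers), while the model descends on its expected error against those adversarial labels. The sketch below is an illustrative reconstruction, not the paper's implementation: the toy data, the logistic model, the error bounds, and all step sizes are assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not from the paper): n examples,
# d features, two noisy "weak labelers" producing soft votes in [0, 1].
n, d = 200, 5
X = rng.normal(size=(n, d))
true_y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(float)
weak = np.stack([
    np.clip(true_y + 0.4 * rng.normal(size=n), 0.0, 1.0),
    np.clip(true_y + 0.5 * rng.normal(size=n), 0.0, 1.0),
])
bounds = np.array([0.25, 0.30])  # assumed expected-error bound per weak labeler

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w = np.zeros(d)           # model parameters (logistic model)
y_adv = np.full(n, 0.5)   # adversarial soft labels: the adversary's variable
lam = np.zeros(2)         # Lagrange multipliers for the error constraints

for step in range(1000):
    p = sigmoid(X @ w)

    # Model step: descend on expected error against the adversarial labels,
    # err = mean(y*(1-p) + (1-y)*p), so d err / d p = (1 - 2*y) per example.
    grad_w = X.T @ ((1.0 - 2.0 * y_adv) * p * (1.0 - p)) / n
    w -= 0.5 * grad_w

    # Adversary step: ascend on model error minus the constraint penalty.
    # Constraint j says the labels must rate weak labeler j as accurate:
    # mean(y*(1-q_j) + (1-y)*q_j) <= bounds[j].
    penalty_grad = lam @ (1.0 - 2.0 * weak)  # shape (n,)
    y_adv = np.clip(y_adv + 0.05 * ((1.0 - 2.0 * p) - penalty_grad), 0.0, 1.0)

    # Dual step: raise each multiplier while its constraint stays violated.
    est_err = np.mean(y_adv * (1.0 - weak) + (1.0 - y_adv) * weak, axis=1)
    lam = np.clip(lam + 0.1 * (est_err - bounds), 0.0, None)

acc = np.mean((sigmoid(X @ w) > 0.5) == (true_y > 0.5))
```

Because the weak labelers correlate with the true labels, the constraints pull the adversarial labels toward the truth, and the model learns a useful classifier despite never seeing `true_y` during training; the remaining constraint slack is where the adversary can still choose labels that hurt the model.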
