Decision Rule Elicitation for Domain Adaptation

23 Feb 2021 · Alexander Nikitin, Samuel Kaski

Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels for data points from experts or to obtain feedback on how close the predicted results are to the target. This simplifies away the details of the expert's decision-making process. In this work, we allow experts to additionally produce decision rules describing their decision-making; the rules are expected to be imperfect but to provide additional information. In particular, the rules can extend to new distributions, and hence enable significant performance improvements in cases where the training and testing distributions differ, such as in domain adaptation. We apply the proposed method to lifelong learning and domain adaptation problems and discuss applications in other branches of AI, such as knowledge acquisition in expert systems. In simulated and real-user studies, we show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
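To make the idea concrete, below is a minimal, hypothetical sketch rather than the paper's actual algorithm: it assumes elicited rules are simple feature-threshold predicates with expert-stated confidences, trains a classifier on the source domain only, and blends its probabilities with rule votes on a covariate-shifted target domain. The `Rule` format, the blending weight, and the synthetic data are illustrative assumptions, not details from the paper.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): blend expert-elicited
# feature-threshold rules with a source-trained classifier on a shifted target domain.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class Rule:
    """Elicited expert rule: 'if x[feature] <op> threshold then predict label'."""
    feature: int
    threshold: float
    op: str            # ">=" or "<"
    label: int
    confidence: float  # expert's stated confidence in the rule, in [0, 1]


def rule_votes(rules, X, n_classes):
    """Aggregate (possibly imperfect) rule firings into per-class soft scores."""
    votes = np.zeros((len(X), n_classes))
    for r in rules:
        fired = X[:, r.feature] >= r.threshold if r.op == ">=" else X[:, r.feature] < r.threshold
        votes[fired, r.label] += r.confidence
    row_sums = votes.sum(axis=1, keepdims=True)
    np.divide(votes, row_sums, out=votes, where=row_sums > 0)  # normalise rows where rules fired
    return votes


def combine(model_proba, votes, model_weight=0.3):
    """Blend model probabilities with rule votes; fall back to the model alone
    for points where no rule fired."""
    fired = votes.sum(axis=1, keepdims=True) > 0
    blended = model_weight * model_proba + (1.0 - model_weight) * votes
    return np.where(fired, blended, model_proba)


rng = np.random.default_rng(0)
# Source domain: the class boundary sits at x0 = 0.
X_src = rng.normal(0.0, 1.0, size=(500, 2))
y_src = (X_src[:, 0] > 0.0).astype(int)
# Target domain: covariate shift moves the boundary to x0 = 2, which the
# source-trained model cannot know about.
X_tgt = rng.normal(2.0, 1.0, size=(500, 2))
y_tgt = (X_tgt[:, 0] > 2.0).astype(int)

model = LogisticRegression().fit(X_src, y_src)

# Hypothetical, imperfect rules elicited from an expert who knows the shifted threshold.
rules = [
    Rule(feature=0, threshold=2.0, op=">=", label=1, confidence=0.8),
    Rule(feature=0, threshold=2.0, op="<", label=0, confidence=0.8),
]

proba = model.predict_proba(X_tgt)
blended = combine(proba, rule_votes(rules, X_tgt, n_classes=2))
print("model-only accuracy on target:    ", (proba.argmax(1) == y_tgt).mean())
print("rule-augmented accuracy on target:", (blended.argmax(1) == y_tgt).mean())
```

The sketch only illustrates why even imperfect, hand-stated rules can carry information across a distribution shift that the source training data cannot; the paper's method elicits such rules from real experts and integrates them into the learning procedure itself.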
