
Decision Rule Elicitation for Domain Adaptation

Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels for data points from experts or to obtain feedback on how close the predicted results are to the target. This, however, simplifies away all other details of the expert's decision-making process. In this work, we allow experts to additionally produce decision rules describing their decision-making; the rules are expected to be imperfect but to provide additional information. In particular, the rules can extend to new distributions and therefore enable significant performance improvements when the training and testing distributions differ, as in domain adaptation. We apply the proposed method to lifelong learning and domain adaptation problems and discuss applications in other branches of AI, such as knowledge acquisition in expert systems. In simulated and real-user studies, we show that decision rule elicitation improves the domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
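To make the idea concrete, the sketch below shows one plausible way to combine expert-elicited decision rules with a classifier trained on the source distribution: when the learned model is uncertain on shifted target data, a rule that fires overrides the prediction. This is only an illustrative sketch, not the authors' implementation; all names (expert_rule, RuleAugmentedClassifier, the confidence threshold) and the fallback strategy are assumptions for illustration.

```python
# Illustrative sketch (hypothetical names, not the paper's implementation):
# combine expert-elicited decision rules with a source-trained classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_rule(x):
    """A hand-written, possibly imperfect rule: returns a label or None (abstain).
    Here: 'if feature 0 exceeds 1.0, predict class 1' -- purely illustrative."""
    return 1 if x[0] > 1.0 else None

class RuleAugmentedClassifier:
    """Falls back to elicited rules when the learned model is uncertain."""
    def __init__(self, rules, confidence_threshold=0.7):
        self.rules = rules
        self.threshold = confidence_threshold
        self.model = LogisticRegression()

    def fit(self, X_source, y_source):
        self.model.fit(X_source, y_source)
        return self

    def predict(self, X_target):
        proba = self.model.predict_proba(X_target)
        preds = proba.argmax(axis=1)
        for i, x in enumerate(X_target):
            if proba[i].max() < self.threshold:   # model unsure on shifted data
                for rule in self.rules:
                    label = rule(x)
                    if label is not None:         # a rule fires: trust the expert
                        preds[i] = label
                        break
        return preds

# Usage: train on the source distribution, predict on a shifted target distribution.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(200, 2))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(1.5, 1.0, size=(50, 2))        # covariate shift
clf = RuleAugmentedClassifier([expert_rule]).fit(X_src, y_src)
print(clf.predict(X_tgt)[:10])
```

The design choice here (rules as a fallback under low model confidence) is just one of several ways rules could be integrated; they could equally be used as extra features or as constraints during training.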
