Evaluating Interactive System Adaptation

LREC 2016 · Edouard Geoffrois

Enabling users of intelligent systems to improve system performance by providing feedback on system errors is an important need. However, the ability of systems to learn from user feedback is difficult to evaluate in an objective and comparative way, because the involvement of real users in the adaptation process is an impediment to objective evaluation. This issue can be solved with an oracle approach, where users are simulated by oracles with access to the reference test data. Another difficulty is finding a meaningful metric, given that system improvements depend both on the feedback provided and on the system itself. A solution is to measure the minimal amount of information needed to correct all system errors. It can be shown that for any well-defined non-interactive task, the interactively supervised version of the task can be evaluated by combining such an oracle-based approach with a minimum supervision rate metric. This new evaluation protocol for adaptive systems is not only expected to drive progress for such systems, but also to pave the way for a specialisation of actors along the value chain of their technological development.

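The following is a minimal sketch of the evaluation protocol described in the abstract, under stated assumptions: an oracle that knows the reference test labels simulates the user, supplying corrections only for items the system gets wrong, and the metric is the fraction of reference information the system needed before its output was error-free. The class names (`AdaptiveSystem`, `evaluate_supervision_rate`) and the toy label-memorising system are illustrative, not the paper's implementation.

```python
from typing import Dict, List, Tuple


class AdaptiveSystem:
    """Toy stand-in for any adaptive system: it memorises corrections."""

    def __init__(self) -> None:
        self.memory: Dict[str, str] = {}

    def predict(self, item: str) -> str:
        # Return the memorised label if feedback was already given for this item.
        return self.memory.get(item, "default")

    def adapt(self, item: str, correction: str) -> None:
        # Incorporate the oracle's (simulated user's) feedback.
        self.memory[item] = correction


def evaluate_supervision_rate(
    system: AdaptiveSystem, test_data: List[Tuple[str, str]]
) -> float:
    """Simulate the user with an oracle holding the reference labels and
    count how much feedback is needed to correct all system errors."""
    feedback_units = 0
    for item, reference in test_data:
        if system.predict(item) != reference:
            system.adapt(item, reference)  # oracle supplies the correction
            feedback_units += 1
    # Supervision rate: share of the reference information the system
    # had to be given; lower means the system learns more from less feedback.
    return feedback_units / len(test_data)


if __name__ == "__main__":
    data = [("a", "x"), ("b", "y"), ("a", "x"), ("c", "z")]
    rate = evaluate_supervision_rate(AdaptiveSystem(), data)
    print(f"supervision rate: {rate:.2f}")
```

Because the oracle replaces the human, two adaptive systems can be compared objectively on the same test set: the one achieving error-free output with the lower supervision rate adapts more effectively.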