Our work is the first to evaluate IoU against human judgments, and it shows that relying on IoU scores alone to assess localization errors may not be sufficient.
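For context, the IoU (intersection over union) score discussed above can be sketched for two axis-aligned bounding boxes as follows; the box format `(x1, y1, x2, y2)` and the function name are illustrative assumptions, not taken from the original work.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Identical boxes score 1.0; disjoint boxes score 0.0.
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # -> 1.0
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # -> 0.0
```

Note that a single IoU number collapses many distinct failure modes (shifted, undersized, or oversized predictions can score identically), which is one reason a score alone may not capture how humans perceive localization errors.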
We apply this notion to re-rank topic-relevant recommendation lists, forming the basis of a novel viewpoint diversification method.
Despite the growing popularity of this approach, there has not yet been a comprehensive literature review to provide guidance to researchers considering using crowdsourcing methodologies in their own medical imaging analysis.
However, in many domains the data are ambiguous, and the examples admit a multitude of perspectives on the information they contain.
This paper describes two sets of crowdsourcing experiments on temporal information annotation, conducted in two languages: English and Italian.