BEIR (Benchmarking-IR) is a heterogeneous benchmark spanning diverse information retrieval (IR) tasks. It enables systematic study of the zero-shot generalization capabilities of neural retrieval approaches; a minimal loading sketch follows below.
202 PAPERS • 19 BENCHMARKS
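As a rough illustration of how the benchmark is consumed, one BEIR dataset (SciFact here) can be downloaded and loaded with the beir Python package. This is a minimal sketch based on the quickstart in the BEIR repository; the download URL and the GenericDataLoader API are assumptions about the current state of that package and may change.

    from beir import util
    from beir.datasets.data_loader import GenericDataLoader

    # Download and unzip a single BEIR dataset for zero-shot evaluation.
    url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
    data_path = util.download_and_unzip(url, "datasets")

    # Standard BEIR format:
    #   corpus:  {doc_id: {"title": ..., "text": ...}}
    #   queries: {query_id: query_text}
    #   qrels:   {query_id: {doc_id: relevance}}
    corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
    print(len(corpus), "documents,", len(queries), "queries")

Because every dataset in the benchmark is distributed in this same corpus/queries/qrels format, a retriever evaluated on one task can be run on all the others without task-specific preprocessing, which is what makes the zero-shot comparison systematic.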
This paper is a condensed report on the second year of the Touché shared task on argument retrieval, held at CLEF 2021. With the goal of providing a collaborative platform for researchers, we organized two tasks: (1) supporting individuals in finding arguments on controversial topics of social importance, and (2) supporting individuals with arguments in personal, everyday comparison situations.
2 PAPERS • 1 BENCHMARK
With the goal of enabling reasoning over financial textual data, we present a novel dataset annotating arguments, their components, and their relations in transcripts of earnings conference calls (ECCs).
0 PAPERS • NO BENCHMARKS YET