Argument Retrieval
5 papers with code • 2 benchmarks • 1 dataset
Most implemented papers
Evaluating Fairness in Argument Retrieval
In this work, we analyze a range of non-stochastic fairness-aware ranking and diversity metrics to evaluate the extent to which argument stances are fairly exposed in argument retrieval systems.
Fine-Grained Argument Unit Recognition and Classification
In this work, we argue that the task should be performed on a more fine-grained level of sequence labeling.
Diversity Aware Relevance Learning for Argument Search
In this work, we focus on the problem of retrieving relevant arguments for a query claim covering diverse aspects.
BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
To address this, and to enable researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval.
SGPT: GPT Sentence Embeddings for Semantic Search
A 5.8 billion parameter SGPT-BE outperforms the best available sentence embeddings by 6%, setting a new state of the art on BEIR.
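Semantic search with sentence embeddings, as in SGPT-BE, ranks documents by vector similarity to an embedded query. The following is a minimal illustrative sketch of that retrieval step only, assuming embeddings have already been computed by some encoder; the vectors and function names here are hypothetical, not part of SGPT itself.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank(query_emb, doc_embs):
    """Return (index, score) pairs sorted by similarity to the query, best first."""
    scores = [(i, cosine(query_emb, emb)) for i, emb in enumerate(doc_embs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Toy 3-dimensional embeddings (hypothetical values, for illustration only).
docs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query = [1.0, 0.0, 0.1]
print(rank(query, docs))  # docs 0 and 2 outrank doc 1
```

In practice the document embeddings would be precomputed and indexed (for example with an approximate nearest-neighbor library), and only the query is embedded at search time.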