Argument Retrieval
9 papers with code • 2 benchmarks • 3 datasets
Most implemented papers
BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
To address this, and to help researchers broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval.
Evaluating Fairness in Argument Retrieval
In this work, we analyze a range of non-stochastic fairness-aware ranking and diversity metrics to evaluate the extent to which argument stances are fairly exposed in argument retrieval systems.
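As a sketch of what a non-stochastic, exposure-based fairness metric over argument stances might look like (the `pro`/`con` labels and the logarithmic position discount are illustrative assumptions, not the specific metrics analyzed in the paper):

```python
import math

def stance_exposure(ranking, discount=lambda rank: 1.0 / math.log2(rank + 1)):
    """Position-discounted share of exposure each stance receives.

    `ranking` is a list of stance labels (e.g. "pro"/"con"), best result
    first; higher-ranked items receive more exposure under the discount.
    """
    exposure = {}
    for rank, stance in enumerate(ranking, start=1):
        exposure[stance] = exposure.get(stance, 0.0) + discount(rank)
    total = sum(exposure.values())
    return {s: e / total for s, e in exposure.items()}

def exposure_disparity(ranking):
    """Absolute gap between pro and con exposure shares; lower is fairer."""
    exp = stance_exposure(ranking)
    return abs(exp.get("pro", 0.0) - exp.get("con", 0.0))
```

For example, a ranking that front-loads all pro arguments (`["pro", "pro", "con", "con"]`) yields a larger disparity than one that alternates stances, even though both contain the same arguments.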
Fine-Grained Argument Unit Recognition and Classification
In this work, we argue that the task should be performed on a more fine-grained level of sequence labeling.
Diversity Aware Relevance Learning for Argument Search
In this work, we focus on the problem of retrieving relevant arguments for a query claim covering diverse aspects.
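A classic baseline for trading relevance against aspect diversity is Maximal Marginal Relevance (MMR). The sketch below is a generic illustration of that idea, not the learning approach proposed in the paper:

```python
def mmr(query_sim, doc_sim, k, lam=0.5):
    """Maximal Marginal Relevance: greedily select k documents, trading off
    relevance to the query against redundancy with already-selected docs.

    query_sim[i]  : relevance score of document i for the query
    doc_sim[i][j] : similarity between documents i and j
    lam           : 1.0 = pure relevance, 0.0 = pure diversity
    """
    candidates = set(range(len(query_sim)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((doc_sim[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-duplicate top arguments, MMR at `lam=0.5` skips the redundant one in favor of a less similar but still relevant argument; at `lam=1.0` it reduces to plain relevance ranking.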
SGPT: GPT Sentence Embeddings for Semantic Search
To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning.
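SGPT pools a causal decoder's token hidden states with position-weighted mean pooling, giving later tokens larger weights since they have attended to more of the sentence. A toy sketch of that pooling step (made-up hidden states, not the released implementation):

```python
def position_weighted_mean(hidden_states):
    """Pool per-token hidden states into one sentence embedding.

    Weight for 1-indexed position i is i / (1 + 2 + ... + S), so later
    tokens -- which see more context in a causal decoder -- count more.
    `hidden_states` is a list of token vectors (lists of floats).
    """
    s = len(hidden_states)
    denom = s * (s + 1) / 2  # sum of 1..S
    dim = len(hidden_states[0])
    embedding = [0.0] * dim
    for i, vec in enumerate(hidden_states, start=1):
        weight = i / denom
        for d in range(dim):
            embedding[d] += weight * vec[d]
    return embedding
```

For two tokens the weights are 1/3 and 2/3, so the second token's vector dominates the pooled embedding.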
No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval
Due to latency constraints, distilled and dense models have become the go-to choice for deployment in real-world retrieval applications.
Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking
Pairing a lexical retriever with a neural re-ranking model has set state-of-the-art performance on large-scale information retrieval datasets.
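A minimal sketch of such a retrieve-then-rerank pipeline, with token overlap standing in for the lexical retriever (e.g. BM25) and an arbitrary scoring callback standing in for the neural re-ranker:

```python
def lexical_retrieve(query, corpus, k=10):
    """First stage: cheap lexical scoring over the whole corpus.

    Token overlap is a stand-in for BM25; returns indices of the
    top-k documents with at least one matching term.
    """
    q = set(query.lower().split())
    scored = [(len(q & set(doc.lower().split())), i) for i, doc in enumerate(corpus)]
    scored.sort(reverse=True)
    return [i for score, i in scored[:k] if score > 0]

def rerank(query, corpus, candidate_ids, neural_score):
    """Second stage: re-order the small shortlist with an expensive scorer.

    `neural_score(query, doc)` stands in for a cross-encoder model that
    would be too slow to run over the full corpus.
    """
    return sorted(candidate_ids, key=lambda i: neural_score(query, corpus[i]), reverse=True)
```

The point of the two stages is cost: the lexical pass narrows millions of documents to a shortlist, and the expensive model only scores that shortlist.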
Systematic Evaluation of Neural Retrieval Models on the Touché 2020 Argument Retrieval Subset of BEIR
Our black-box evaluation reveals an inherent bias of neural models towards retrieving short passages from the Touché 2020 data, and we also find that quite a few of the neural models' results are unjudged in the Touché 2020 data.
Overview of PerpectiveArg2024: The First Shared Task on Perspective Argument Retrieval
Argument retrieval is the task of finding relevant arguments for a given query.