Search Results for author: Mete Sertkan

Found 9 papers, 7 papers with code

Ranger: A Toolkit for Effect-Size Based Multi-Task Evaluation

1 code implementation · 24 May 2023 · Mete Sertkan, Sophia Althammer, Sebastian Hofstätter

In this paper, we introduce Ranger - a toolkit to facilitate the easy use of effect-size-based meta-analysis for multi-task evaluation in NLP and IR.
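Ranger's own API is not shown in this snippet, but the core idea of effect-size-based comparison can be illustrated with a minimal, framework-free sketch: compute a standardized mean difference (Cohen's d) between two systems' per-query scores for each task, then combine the per-task effects with a weighted mean. Function names and the weighting scheme here are illustrative assumptions, not the toolkit's interface.

```python
import math

def cohens_d(scores_a, scores_b):
    """Standardized mean difference between two systems' per-query scores."""
    n_a, n_b = len(scores_a), len(scores_b)
    mean_a = sum(scores_a) / n_a
    mean_b = sum(scores_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in scores_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in scores_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def combined_effect(effects, weights):
    """Combine per-task effect sizes with a weighted mean
    (e.g. weights proportional to the number of queries per task)."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)
```

Because each task's difference is expressed on a common standardized scale, effects from tasks with very different metric ranges can be compared and aggregated.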

Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction

no code implementations · 24 Mar 2022 · Sebastian Hofstätter, Omar Khattab, Sophia Althammer, Mete Sertkan, Allan Hanbury

Recent progress in neural information retrieval has demonstrated large gains in effectiveness, while often sacrificing the efficiency and interpretability of the neural model compared to classical approaches.

Information Retrieval · Retrieval

PARM: A Paragraph Aggregation Retrieval Model for Dense Document-to-Document Retrieval

1 code implementation · 5 Jan 2022 · Sophia Althammer, Sebastian Hofstätter, Mete Sertkan, Suzan Verberne, Allan Hanbury

However, in the web domain we are in a setting with large amounts of training data and a query-to-passage or query-to-document retrieval task.

Passage Retrieval · Retrieval

Establishing Strong Baselines for TripClick Health Retrieval

2 code implementations · 2 Jan 2022 · Sebastian Hofstätter, Sophia Althammer, Mete Sertkan, Allan Hanbury

We present strong Transformer-based re-ranking and dense retrieval baselines for the recently released TripClick health ad-hoc retrieval collection.

Re-Ranking · Retrieval

A Time-Optimized Content Creation Workflow for Remote Teaching

1 code implementation · 11 Oct 2021 · Sebastian Hofstätter, Sophia Althammer, Mete Sertkan, Allan Hanbury

We describe our workflow to create an engaging remote learning experience for a university course, while minimizing the post-production time of the educators.

Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation

1 code implementation · 6 Oct 2020 · Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, Allan Hanbury

Based on this finding, we propose a cross-architecture training procedure with a margin focused loss (Margin-MSE), that adapts knowledge distillation to the varying score output distributions of different BERT and non-BERT passage ranking architectures.

Knowledge Distillation · Passage Ranking · +3
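The margin-focused loss described above can be sketched without any deep-learning framework: instead of matching the teacher's raw scores, the student matches the teacher's margin between a positive and a negative passage, which makes teachers and students with differently scaled score distributions compatible. This is a minimal per-batch sketch on plain lists; the paper's implementation operates on batched tensors.

```python
def margin_mse(student_pos, student_neg, teacher_pos, teacher_neg):
    """Margin-MSE: mean squared error between the student's and the
    teacher's positive-negative score margins over a batch of triples."""
    squared_diffs = []
    for sp, sn, tp, tn in zip(student_pos, student_neg, teacher_pos, teacher_neg):
        # compare margins, not absolute scores, so score scales may differ
        margin_diff = (sp - sn) - (tp - tn)
        squared_diffs.append(margin_diff ** 2)
    return sum(squared_diffs) / len(squared_diffs)
```

For example, a student margin of 1.0 against a teacher margin of 2.0 contributes a squared error of 1.0, regardless of where the absolute scores lie.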

Fine-Grained Relevance Annotations for Multi-Task Document Ranking and Question Answering

1 code implementation · 12 Aug 2020 · Sebastian Hofstätter, Markus Zlabinger, Mete Sertkan, Michael Schröder, Allan Hanbury

We extend the ranked retrieval annotations of the Deep Learning track of TREC 2019 with passage and word level graded relevance annotations for all relevant documents.

Document Ranking · Question Answering · +1

DEXA: Supporting Non-Expert Annotators with Dynamic Examples from Experts

1 code implementation · 17 May 2020 · Markus Zlabinger, Marta Sabou, Sebastian Hofstätter, Mete Sertkan, Allan Hanbury

of 0.68 to experts in DEXA vs. 0.40 in CONTROL); (ii) already three annotations aggregated by majority voting in the DEXA approach reach substantial agreement with experts of 0.78/0.75/0.69 for P/I/O (in CONTROL 0.73/0.58/0.46).

Avg · Sentence · +1
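The aggregation step mentioned in the snippet, combining three non-expert annotations per item by majority voting, can be sketched as follows. The P/I/O labels come from the snippet; the function names are illustrative, not from the paper's code.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label among one item's annotations."""
    return Counter(labels).most_common(1)[0][0]

def aggregate(annotations):
    """annotations: one list of annotator labels per item."""
    return [majority_vote(item_labels) for item_labels in annotations]
```

With three annotators per item, a label kept by the vote was chosen by at least two of them, which is what lets aggregated non-expert labels approach expert agreement.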
