Learning-To-Rank
178 papers with code • 0 benchmarks • 9 datasets
Learning to rank is the application of machine learning to build ranking models. Some common use cases for ranking models are information retrieval (e.g., web search) and news feed applications (e.g., Twitter, Facebook, Instagram).
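As a minimal sketch of the idea, a pairwise learning-to-rank model scores each item and is trained so that relevant items score higher than less relevant ones. The snippet below shows a RankNet-style logistic pairwise loss with a toy linear scorer; the feature values and weights are hypothetical, purely for illustration.

```python
import math

def pairwise_ranking_loss(score_i, score_j):
    """RankNet-style logistic loss for a pair where item i should rank
    above item j: loss = log(1 + exp(-(s_i - s_j)))."""
    return math.log(1.0 + math.exp(-(score_i - score_j)))

def score(features, weights):
    """Toy linear scoring model: dot product of features and weights."""
    return sum(f * w for f, w in zip(features, weights))

# Hypothetical query with two documents; doc_a is the more relevant one.
doc_a = [1.0, 0.2]   # e.g., term-match and freshness features
doc_b = [0.3, 0.9]
weights = [0.8, 0.1]

loss = pairwise_ranking_loss(score(doc_a, weights), score(doc_b, weights))
```

When the model already orders the pair correctly (relevant item scores higher), the loss falls below log 2; a misordered pair pushes it above log 2, so gradient descent on this loss nudges the weights toward the correct ordering.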
Benchmarks
These leaderboards are used to track progress in Learning-To-Rank.
Libraries
Use these libraries to find Learning-To-Rank models and implementations.
Latest papers
NoRefER: a Referenceless Quality Metric for Automatic Speech Recognition via Semi-Supervised Language Model Fine-Tuning with Contrastive Learning
The self-supervised NoRefER exploits the known quality relationships between hypotheses from multiple compression levels of an ASR model for learning to rank intra-sample hypotheses by quality, which is essential for model comparisons.
A Reference-less Quality Metric for Automatic Speech Recognition via Contrastive-Learning of a Multi-Language Model with Self-Supervision
The common standard for quality evaluation of automatic speech recognition (ASR) systems is reference-based metrics such as the Word Error Rate (WER), computed using manual ground-truth transcriptions that are time-consuming and expensive to obtain.
Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective
Building upon this, we leverage offline RL techniques for off-policy LTR and propose the Click Model-Agnostic Unified Off-policy Learning to Rank (CUOLR) method, which could be easily applied to a wide range of click models.
RankFormer: Listwise Learning-to-Rank Using Listwide Labels
Web applications where users are presented with a limited selection of items have long employed ranking models to put the most relevant results first.
LibAUC: A Deep Learning Library for X-Risk Optimization
This paper introduces the award-winning deep learning (DL) library called LibAUC for implementing state-of-the-art algorithms towards optimizing a family of risk functions named X-risks.
RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank
In this paper, we propose a novel approach, RankCSE, for unsupervised sentence representation learning, which incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
SELFOOD: Self-Supervised Out-Of-Distribution Detection via Learning to Rank
To address the annotation bottleneck, we introduce SELFOOD, a self-supervised OOD detection method that requires only in-distribution samples as supervision.
THUIR@COLIEE 2023: Incorporating Structural Knowledge into Pre-trained Language Models for Legal Case Retrieval
Legal case retrieval techniques play an essential role in modern intelligent legal systems.
THUIR@COLIEE 2023: More Parameters and Legal Knowledge for Legal Case Entailment
This paper describes the approach of the THUIR team at the COLIEE 2023 Legal Case Entailment task.
On the Impact of Outlier Bias on User Clicks
We therefore propose an outlier-aware click model that accounts for both outlier and position bias, called the outlier-aware position-based model (OPBM).