Learning-To-Rank
174 papers with code • 0 benchmarks • 9 datasets
Learning to rank is the application of machine learning to build ranking models. Common use cases for ranking models are information retrieval (e.g., web search) and news feed applications (think Twitter, Facebook, Instagram).
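To make the task concrete, here is a minimal pairwise learning-to-rank sketch in plain NumPy. It trains a linear scoring model with a RankNet-style logistic pairwise loss on a toy query: for every document pair where one document is more relevant than the other, the loss pushes the more relevant document's score higher. The data, features, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

# Toy data (assumed): 3 documents for one query, 2 features each,
# with graded relevance labels (higher = more relevant).
X = np.array([[1.0, 0.2], [0.3, 0.9], [0.5, 0.5]])
y = np.array([2, 0, 1])

w = np.zeros(2)  # linear scoring model: score(d) = w . x_d
lr = 0.1

# Pairwise (RankNet-style) training: for every pair (i, j) with
# y[i] > y[j], minimize log(1 + exp(-(score_i - score_j))).
for _ in range(200):
    scores = X @ w
    grad = np.zeros_like(w)
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                p = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
                grad += -p * (X[i] - X[j])
    w -= lr * grad

ranking = np.argsort(-(X @ w))  # document indices, best first
print(ranking.tolist())  # [0, 2, 1] — matches the relevance order
```

Real libraries (e.g., LightGBM's LambdaRank objective or TensorFlow Ranking) replace this inner double loop with vectorized, listwise-aware losses, but the pairwise principle is the same.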
Benchmarks
These leaderboards are used to track progress in Learning-To-Rank.
Libraries
Use these libraries to find Learning-To-Rank models and implementations
Datasets
Latest papers
RankingSHAP -- Listwise Feature Attribution Explanations for Ranking Models
We evaluate RankingSHAP for commonly used learning-to-rank datasets to showcase the more nuanced use of an attribution method while highlighting the limitations of selection-based explanations.
Metasql: A Generate-then-Rank Framework for Natural Language to SQL Translation
While these translation models have greatly improved the overall translation accuracy, surpassing 70% on NLIDB benchmarks, the use of auto-regressive decoding to generate single SQL queries may result in sub-optimal outputs, potentially leading to erroneous translations.
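The generate-then-rank idea behind frameworks like Metasql can be sketched generically: rather than committing to the single output of greedy auto-regressive decoding, produce several candidate queries and let a ranking function choose among them. Everything below (the generator, the scorer, the templates) is a toy stand-in, not Metasql's actual pipeline.

```python
# Hypothetical generate-then-rank sketch: sample several candidate SQL
# queries for a question, then return the one the ranker scores highest.
def generate_then_rank(question, generate, score, n_candidates=5):
    """`generate` yields candidate SQL strings; `score` ranks them."""
    candidates = [generate(question, seed=i) for i in range(n_candidates)]
    return max(candidates, key=score)

# Toy stand-ins for the translation model and the ranker (assumptions):
def toy_generate(question, seed):
    templates = [
        "SELECT name FROM users",
        "SELECT name FROM users WHERE age > 30",
        "SELECT * FROM users WHERE age > 30",
    ]
    return templates[seed % len(templates)]

def toy_score(sql):
    # Reward candidates that keep the filter and the requested column.
    return ("WHERE age > 30" in sql) + ("SELECT name" in sql)

best = generate_then_rank("names of users older than 30",
                          toy_generate, toy_score)
print(best)  # SELECT name FROM users WHERE age > 30
```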
Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from Large Language Models
Scale calibration in ranking systems adjusts a ranker's outputs so they correspond to meaningful quantities such as click-through rates or relevance. This is crucial for reflecting real-world value and thereby boosts the system's effectiveness and reliability.
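As a minimal illustration of what scale calibration means (not the paper's LLM-based method), one classic approach is a Platt-style monotonic transform: fit sigmoid(a*s + b) so raw ranker scores s match observed click outcomes, without changing the induced ranking. The simulated data and learning rate below are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulated (assumed) data: raw ranker scores and clicks whose true
# click probability is sigmoid(2*s - 1).
rng = np.random.default_rng(1)
scores = rng.normal(size=500)
clicks = rng.random(500) < sigmoid(2 * scores - 1)

# Fit the calibration parameters (a, b) by gradient descent on log loss.
a, b = 1.0, 0.0
for _ in range(2000):
    p = sigmoid(a * scores + b)
    ga = np.mean((p - clicks) * scores)
    gb = np.mean(p - clicks)
    a -= 0.5 * ga
    b -= 0.5 * gb

print(a, b)  # approaches the true (2, -1), up to sampling noise
```

Because the transform is monotonic, the ranking is unchanged; only the scores' scale now tracks click probability.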
List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation
First, it is hard to share the contextual information of the ranking list between the two tasks.
How to Forget Clients in Federated Online Learning to Rank?
In a federated online learning-to-rank (FOLTR) system, a ranker is learned by aggregating local updates into the global ranking model.
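The aggregation step described above can be sketched with a FedAvg-style loop: each client takes a gradient step on a shared linear ranker using only its own interaction data, and the server averages the resulting models. The pointwise squared loss, client data, and learning rate are illustrative assumptions, not the paper's protocol (which concerns *unlearning* such contributions).

```python
import numpy as np

def local_update(global_w, X, y, lr=0.05):
    """One client-side gradient step on local data (pointwise squared loss)."""
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5, 0.2])  # assumed "ideal" ranker weights
global_w = np.zeros(3)

for _ in range(100):          # federated rounds
    updates = []
    for _ in range(4):        # clients, each with private interaction data
        X = rng.normal(size=(8, 3))
        y = X @ true_w + rng.normal(scale=0.1, size=8)
        updates.append(local_update(global_w, X, y))
    global_w = np.mean(updates, axis=0)  # server-side FedAvg aggregation

print(global_w)  # close to true_w; no client ever shared raw data
```

Forgetting a client in this setting is hard precisely because its updates are blended into every subsequent `global_w`.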
Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine
Therefore, we focus on the task of retrieving target objects from open-vocabulary user instructions in a human-in-the-loop setting, which we define as the learning-to-rank physical objects (LTRPO) task.
SARDINE: A Simulator for Automated Recommendation in Dynamic and Interactive Environments
Simulators can provide valuable insights for researchers and practitioners who wish to improve recommender systems, because they make it easy to tweak the experimental setup in which recommender systems operate, lowering the cost of identifying general trends and uncovering novel findings about the candidate methods.
GLEN: Generative Retrieval via Lexical Index Learning
For training, GLEN effectively exploits a dynamic lexical identifier using a two-phase index learning strategy, enabling it to learn meaningful lexical identifiers and relevance signals between queries and documents.
RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Active Data Manipulation
Federated learning (FL) has recently emerged as a privacy-preserving approach for machine learning in domains that rely on user interactions, particularly recommender systems (RS) and online learning to rank (OLTR).
Learning to Rank Context for Named Entity Recognition Using a Synthetic Dataset
Using this dataset, we train a neural context retriever based on a BERT model that is able to find relevant context for NER.