Search Results for author: Franco Maria Nardini

Found 14 papers, 6 papers with code

Bridging Dense and Sparse Maximum Inner Product Search

no code implementations16 Sep 2023 Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty

Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval.

Dimensionality Reduction Information Retrieval +1
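For context, exact MIPS in both the dense and sparse settings reduces to ranking documents by inner product with the query; a minimal brute-force sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def mips_topk(query, docs, k=3):
    """Exact top-k maximum inner product search by brute force.

    query: (d,) vector; docs: (n, d) matrix. Shown here for dense vectors;
    sparse top-k retrieval is the same ranking over sparse dot products.
    """
    scores = docs @ query              # inner product with every document
    top = np.argsort(-scores)[:k]      # indices of the k largest scores
    return top, scores[top]

docs = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
query = np.array([1.0, 0.2])
idx, s = mips_topk(query, docs, k=2)
# idx ranks document 0 (score 1.0) above document 1 (score 0.6)
```

Indexed methods on either side of the dense/sparse divide aim to approximate this exact ranking without scoring every document.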

Efficient and Effective Tree-based and Neural Learning to Rank

no code implementations15 May 2023 Sebastian Bruch, Claudio Lucchese, Franco Maria Nardini

We believe that by understanding the fundamentals underpinning these algorithmic and data structure solutions for managing the tension between efficiency and effectiveness, one can better identify future directions and more readily assess the merits of new ideas.

Information Retrieval Learning-To-Rank +1

An Approximate Algorithm for Maximum Inner Product Search over Streaming Sparse Vectors

no code implementations25 Jan 2023 Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty

To achieve optimal memory footprint and query latency, they rely on the near stationarity of documents and on laws governing natural languages.

Information Retrieval Retrieval

Caching Historical Embeddings in Conversational Search

no code implementations25 Nov 2022 Ophir Frieder, Ida Mele, Cristina Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto

The high cache hit rates we achieve significantly improve the responsiveness of conversational systems while also reducing the number of queries handled by the search back-end.

Conversational Search Document Embedding +1
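As a toy illustration of why hit rates matter here, a cache lookup that succeeds avoids a back-end embedding computation entirely; a minimal sketch (the class and its plain-dict store are assumptions for illustration, not the paper's system):

```python
class EmbeddingCache:
    """Toy cache for historical embeddings with hit-rate tracking.

    Illustrative only: in conversational search, earlier turns of the same
    conversation make repeated lookups likely, which drives up the hit rate.
    """
    def __init__(self):
        self.store, self.hits, self.lookups = {}, 0, 0

    def get(self, key, compute):
        self.lookups += 1
        if key in self.store:
            self.hits += 1                   # served from cache: no back-end call
        else:
            self.store[key] = compute(key)   # miss: compute and cache
        return self.store[key]

    @property
    def hit_rate(self):
        return self.hits / self.lookups if self.lookups else 0.0

cache = EmbeddingCache()
for q in ["weather", "weather", "news", "weather"]:
    cache.get(q, compute=lambda k: [float(len(k))])  # stand-in embedder
# cache.hit_rate == 0.5 (2 hits out of 4 lookups)
```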

ILMART: Interpretable Ranking with Constrained LambdaMART

1 code implementation1 Jun 2022 Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Alberto Veneri

Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming at developing intelligible and accurate predictive models.

Learning-To-Rank

An Optimal Algorithm for Finding Champions in Tournament Graphs

1 code implementation26 Nov 2021 Lorenzo Beretta, Franco Maria Nardini, Roberto Trani, Rossano Venturini

In this paper, we address the problem of finding a champion of the tournament, also known as the Copeland winner: a player who wins the highest number of matches.

Conversational Search Information Retrieval +4
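For reference, the Copeland winner of a complete tournament is simply the player with the most wins; a naive quadratic sketch (the paper's contribution is an algorithm that is optimal in the number of match comparisons, not this brute force):

```python
def copeland_winner(beats, n):
    """Return the player with the most wins in a round-robin tournament.

    beats(i, j) -> True iff player i beats player j. Naive O(n^2) version:
    it plays out every pair, unlike a comparison-optimal algorithm.
    """
    wins = [sum(beats(i, j) for j in range(n) if j != i) for i in range(n)]
    return max(range(n), key=lambda i: wins[i])

# Toy tournament: the lower index always wins, so player 0 is the champion.
champ = copeland_winner(lambda i, j: i < j, 4)
# champ == 0
```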

Learning Early Exit Strategies for Additive Ranking Ensembles

1 code implementation6 May 2021 Francesco Busolin, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

Modern search engine ranking pipelines are commonly based on large machine-learned ensembles of regression trees.

Dynamic Hard Pruning of Neural Networks at the Edge of the Internet

no code implementations17 Nov 2020 Lorenzo Valerio, Franco Maria Nardini, Andrea Passarella, Raffaele Perego

Results show that DynHP compresses a NN up to $10$ times without significant performance drops (up to $3.5\%$ additional error w.r.t. …).

Edge-computing

Query-level Early Exit for Additive Learning-to-Rank Ensembles

no code implementations30 Apr 2020 Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

In this paper, we investigate the novel problem of query-level early exiting: deciding whether it is profitable to stop the traversal of the ranking ensemble early for all the candidate documents to be scored for a query, simply returning a ranking based on the additive scores computed by a limited portion of the ensemble.

Learning-To-Rank
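The idea above can be sketched in a few lines; note that the exit decision here is a fixed score-gap heuristic standing in for whatever criterion a real system would use, and `cutoff` and `margin` are illustrative parameters, not from the paper:

```python
def score_with_query_level_exit(trees, docs, cutoff, margin):
    """Hypothetical sketch of query-level early exiting for an additive ensemble.

    Score every candidate document with the first `cutoff` trees; if the
    partial ranking already separates the top two documents by at least
    `margin`, return it and skip the remaining trees for the whole query.
    """
    partial = [sum(t(d) for t in trees[:cutoff]) for d in docs]
    ranked = sorted(range(len(docs)), key=lambda i: -partial[i])
    if len(ranked) < 2 or partial[ranked[0]] - partial[ranked[1]] >= margin:
        return ranked                  # early exit: partial scores suffice
    full = [sum(t(d) for t in trees) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -full[i])

trees = [lambda d: d, lambda d: 2.0 * d]   # toy additive "ensemble"
ranked = score_with_query_level_exit(trees, [1.0, 0.1], cutoff=1, margin=0.5)
# partial scores [1.0, 0.1] already have a 0.9 gap, so the query exits early
```

The point of deciding per query, rather than per document, is that a single cheap check can skip most of the ensemble for every candidate at once.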

Expansion via Prediction of Importance with Contextualization

1 code implementation29 Apr 2020 Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder

We also observe that the performance is additive with the current leading first-stage retrieval methods, further narrowing the gap between inexpensive and cost-prohibitive passage ranking approaches.

Language Modelling Passage Ranking +2

Training Curricula for Open Domain Answer Re-Ranking

1 code implementation29 Apr 2020 Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder

We show that the proposed heuristics can be used to build a training curriculum that down-weights difficult samples early in the training process.

Re-Ranking
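A down-weighting curriculum of this kind can take many forms; one minimal sketch, assuming a difficulty score in $[0, 1]$ and a linear warm-up (the function and its schedule are illustrative, not the paper's heuristics):

```python
def curriculum_weight(difficulty, step, warmup_steps):
    """Hypothetical curriculum weight: down-weight difficult samples early.

    difficulty in [0, 1] (1 = hardest). At step 0 a hard sample gets weight
    close to 1 - difficulty; after `warmup_steps` every sample converges to
    full weight 1.0, so late training sees all samples equally.
    """
    progress = min(1.0, step / warmup_steps)   # 0 -> 1 over the warm-up
    return (1.0 - difficulty) * (1.0 - progress) + progress

# A hard sample (difficulty 0.9) at step 0 vs. after the warm-up:
early = curriculum_weight(0.9, step=0, warmup_steps=1000)    # ~0.1
late = curriculum_weight(0.9, step=1000, warmup_steps=1000)  # 1.0
```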
