Search Results for author: Franco Maria Nardini

Found 26 papers, 14 papers with code

Effective Inference-Free Retrieval for Learned Sparse Representations

no code implementations · 30 Apr 2025 · Franco Maria Nardini, Thong Nguyen, Cosimo Rulli, Rossano Venturini, Andrew Yates

In this paper, we conduct an extended evaluation of regularization approaches for LSR where we discuss their effectiveness, efficiency, and out-of-domain generalization capabilities.
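One regularizer commonly evaluated in this line of work is the FLOPS penalty, which pushes the average activation of each vocabulary dimension toward zero so that whole posting lists vanish from the index. A minimal NumPy sketch of that idea (illustrative only; names and shapes are ours, not the paper's implementation):

```python
import numpy as np

def flops_regularizer(batch_reprs: np.ndarray) -> float:
    """FLOPS penalty for a batch of learned sparse representations.

    batch_reprs: (batch_size, vocab_size) non-negative term weights.
    The penalty sums, over vocabulary dimensions, the squared mean
    activation; this drives entire dimensions to zero, which shortens
    posting lists at index time.
    """
    mean_per_dim = batch_reprs.mean(axis=0)  # (vocab_size,)
    return float(np.sum(mean_per_dim ** 2))

# A dimension that fires on every document is penalized more than the
# same total mass spread across rarely-used dimensions.
always_on = np.array([[1.0, 0.0], [1.0, 0.0]])  # dim 0 active in both docs
spread = np.array([[1.0, 0.0], [0.0, 1.0]])     # each dim active in one doc
assert flops_regularizer(always_on) > flops_regularizer(spread)
```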

Domain Generalization · Retrieval

Efficient Conversational Search via Topical Locality in Dense Retrieval

1 code implementation · 30 Apr 2025 · Cristina Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Guido Rocchietti, Cosimo Rulli

Pre-trained language models have been widely exploited to learn dense representations of documents and queries for information retrieval.

Conversational Search · Information Retrieval +1

Investigating the Scalability of Approximate Sparse Retrieval Algorithms to Massive Datasets

1 code implementation · 20 Jan 2025 · Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, Rossano Venturini, Leonardo Venuta

Learned sparse text embeddings have gained popularity due to their effectiveness in top-k retrieval and inherent interpretability.

Retrieval

kANNolo: Sweet and Smooth Approximate k-Nearest Neighbors Search

1 code implementation · 10 Jan 2025 · Leonardo Delfino, Domenico Erriquez, Silvio Martinico, Franco Maria Nardini, Cosimo Rulli, Rossano Venturini

kANNolo is the first ANN library that supports both dense and sparse vector representations on top of different similarity measures, e.g., Euclidean distance and inner product.

Information Retrieval · Quantization +1

Power- and Fragmentation-aware Online Scheduling for GPU Datacenters

1 code implementation · 23 Dec 2024 · Francesco Lettich, Emanuele Carlini, Franco Maria Nardini, Raffaele Perego, Salvatore Trani

A recent scheduling policy, Fragmentation Gradient Descent (FGD), leverages a fragmentation metric to address this issue.

Scheduling

Rewriting Conversational Utterances with Instructed Large Language Models

no code implementations · 10 Oct 2024 · Elnara Galimzhanova, Cristina Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Guido Rocchietti

Many recent studies have shown the ability of large language models (LLMs) to achieve state-of-the-art performance on many NLP tasks, such as question answering, text summarization, coding, and translation.

Conversational Search · Question Answering +1

Early Exit Strategies for Approximate k-NN Search in Dense Retrieval

no code implementations · 9 Aug 2024 · Francesco Busolin, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

A popular technique for making A-kNN search efficient is based on a two-level index, where the embeddings of documents are clustered offline and, at query processing, a fixed number N of clusters closest to the query is visited exhaustively to compute the result set.
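The two-level scheme described in this excerpt can be sketched as a toy IVF-style index: cluster assignments are built offline, and at query time only the N clusters nearest to the query are scanned exhaustively. A NumPy sketch under our own assumptions (class and parameter names are illustrative, not the paper's code):

```python
import numpy as np

class TwoLevelIndex:
    """Toy clustered index for approximate k-NN over dense embeddings."""

    def __init__(self, docs: np.ndarray, centroids: np.ndarray):
        # Offline: assign every document embedding to its nearest centroid.
        assign = np.argmin(
            ((docs[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        self.docs, self.centroids = docs, centroids
        self.clusters = [np.where(assign == c)[0] for c in range(len(centroids))]

    def search(self, query: np.ndarray, k: int, nprobe: int) -> np.ndarray:
        # Online: visit only the nprobe clusters closest to the query ...
        order = np.argsort(((self.centroids - query) ** 2).sum(-1))[:nprobe]
        cand = np.concatenate([self.clusters[c] for c in order])
        # ... and score those candidates exhaustively.
        dist = ((self.docs[cand] - query) ** 2).sum(-1)
        return cand[np.argsort(dist)[:k]]

rng = np.random.default_rng(0)
docs = rng.normal(size=(200, 8))
centroids = docs[rng.choice(200, size=8, replace=False)]
index = TwoLevelIndex(docs, centroids)
hits = index.search(docs[0], k=5, nprobe=2)  # the query itself is ranked first
```

Accuracy degrades gracefully as nprobe shrinks, which is exactly the efficiency/effectiveness knob the early-exit strategies in this paper target.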

Retrieval

Pairing Clustered Inverted Indexes with kNN Graphs for Fast Approximate Retrieval over Learned Sparse Representations

no code implementations · 8 Aug 2024 · Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, Rossano Venturini

At query time, each inverted list associated with a query term is traversed one block at a time in an arbitrary order, with the inner product between the query and summaries determining if a block must be evaluated.
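The block-skipping idea in this excerpt can be illustrated with a coordinate-wise max "summary": for non-negative vectors, the query-summary inner product upper-bounds the query-document score of every document in the block, so a block whose bound misses the threshold can be skipped safely. A toy NumPy sketch (our own simplification, not the paper's data structure):

```python
import numpy as np

rng = np.random.default_rng(1)
doc_vecs = rng.random((50, 16)) * (rng.random((50, 16)) < 0.2)  # sparse-ish docs
query = rng.random(16) * (rng.random(16) < 0.3)

BLOCK = 10
blocks = []
for i in range(0, 50, BLOCK):
    ids = np.arange(i, i + BLOCK)
    # Coordinate-wise max: query @ summary >= query @ doc for every doc
    # in the block (all entries are non-negative).
    blocks.append((ids, doc_vecs[ids].max(axis=0)))

threshold = 1.0
scores, evaluated = {}, 0
for ids, summary in blocks:
    if float(query @ summary) >= threshold:  # evaluate promising blocks only
        evaluated += 1
        for d in ids:
            scores[int(d)] = float(query @ doc_vecs[d])
```

By construction, no skipped document could have reached the threshold, so pruning is lossless with respect to that bound.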

Text Retrieval

Optimistic Query Routing in Clustering-based Approximate Maximum Inner Product Search

1 code implementation · 20 May 2024 · Sebastian Bruch, Aditya Krishnan, Franco Maria Nardini

Clustering-based nearest neighbor search is an effective method in which points are partitioned into geometric shards to form an index, with only a few shards searched during query processing to find a set of top-$k$ vectors.

Clustering · Sequential Decision Making

Efficient Inverted Indexes for Approximate Retrieval over Learned Sparse Representations

1 code implementation · 29 Apr 2024 · Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, Rossano Venturini

In this work, we propose a novel organization of the inverted index that enables fast yet effective approximate retrieval over learned sparse embeddings.

Text Retrieval

A Learning-to-Rank Formulation of Clustering-Based Approximate Nearest Neighbor Search

1 code implementation · 17 Apr 2024 · Thomas Vecchiato, Claudio Lucchese, Franco Maria Nardini, Sebastian Bruch

Its objective is to return a set of $k$ data points that are closest to a query point, with its accuracy measured by the proportion of exact nearest neighbors captured in the returned set.
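The accuracy measure described here is usually reported as recall@k: the fraction of the exact k nearest neighbors that appear in the approximate result set. A minimal sketch, with ground truth computed by exhaustive search (the simulated ANN answer is ours):

```python
import numpy as np

def recall_at_k(returned_ids, exact_ids) -> float:
    """Fraction of the exact k nearest neighbors present in the returned set."""
    return len(set(returned_ids) & set(exact_ids)) / len(exact_ids)

rng = np.random.default_rng(2)
data, q = rng.normal(size=(100, 4)), rng.normal(size=4)
exact = np.argsort(((data - q) ** 2).sum(-1))[:10]   # exhaustive ground truth
returned = list(exact[:8]) + [998, 999]              # simulated ANN answer
# recall_at_k(returned, exact) -> 0.8
```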

Clustering · Information Retrieval +1

Efficient Multi-Vector Dense Retrieval Using Bit Vectors

1 code implementation · 3 Apr 2024 · Franco Maria Nardini, Cosimo Rulli, Rossano Venturini

This paper proposes "Efficient Multi-Vector dense retrieval with Bit vectors" (EMVB), a novel framework for efficient query processing in multi-vector dense retrieval.

Quantization · Retrieval

Bridging Dense and Sparse Maximum Inner Product Search

no code implementations · 16 Sep 2023 · Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty

Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval.

Dimensionality Reduction · Information Retrieval +1

Efficient and Effective Tree-based and Neural Learning to Rank

no code implementations · 15 May 2023 · Sebastian Bruch, Claudio Lucchese, Franco Maria Nardini

We believe that by understanding the fundamentals underpinning these algorithmic and data structure solutions for containing the contentious relationship between efficiency and effectiveness, one can better identify future directions and more efficiently determine the merits of ideas.

Information Retrieval · Learning-To-Rank +1

An Approximate Algorithm for Maximum Inner Product Search over Streaming Sparse Vectors

no code implementations · 25 Jan 2023 · Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty

To achieve optimal memory footprint and query latency, they rely on the near stationarity of documents and on laws governing natural languages.

Information Retrieval · Retrieval

Caching Historical Embeddings in Conversational Search

no code implementations · 25 Nov 2022 · Ophir Frieder, Ida Mele, Cristina Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto

Our achieved high cache hit rates significantly improve the responsiveness of conversational systems while likewise reducing the number of queries managed on the search back-end.

Conversational Search · Document Embedding +1

ILMART: Interpretable Ranking with Constrained LambdaMART

1 code implementation · 1 Jun 2022 · Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Alberto Veneri

Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming at developing intelligible and accurate predictive models.

Learning-To-Rank

An Optimal Algorithm for Finding Champions in Tournament Graphs

1 code implementation · 26 Nov 2021 · Lorenzo Beretta, Franco Maria Nardini, Roberto Trani, Rossano Venturini

In this paper, we address the problem of finding a champion of the tournament, also known as Copeland winner, which is a player that wins the highest number of matches.
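A Copeland winner can be found naively by counting every player's wins, at the cost of inspecting all n(n-1)/2 matches; the paper's contribution is an algorithm that needs far fewer comparisons, so the sketch below is only the trivial baseline (function and parameter names are ours):

```python
def copeland_winner(n, beats):
    """Naive Copeland winner: the player with the most wins.

    beats(i, j) returns True iff player i beats player j; in a tournament
    exactly one of beats(i, j), beats(j, i) holds for each pair.
    Cost: n * (n - 1) / 2 match lookups.
    """
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if beats(i, j):
                wins[i] += 1
            else:
                wins[j] += 1
    return max(range(n), key=lambda p: wins[p])

# Toy tournament: player i beats player j whenever i > j,
# so the Copeland winner is the last player.
assert copeland_winner(5, lambda i, j: i > j) == 4
```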

Conversational Search · Information Retrieval +4

Learning Early Exit Strategies for Additive Ranking Ensembles

1 code implementation · 6 May 2021 · Francesco Busolin, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

Modern search engine ranking pipelines are commonly based on large machine-learned ensembles of regression trees.

Dynamic Hard Pruning of Neural Networks at the Edge of the Internet

no code implementations · 17 Nov 2020 · Lorenzo Valerio, Franco Maria Nardini, Andrea Passarella, Raffaele Perego

Results show that DynHP compresses a NN up to $10$ times without significant performance drops (up to $3.5\%$ additional error w.r.t.

Edge-computing

Query-level Early Exit for Additive Learning-to-Rank Ensembles

no code implementations · 30 Apr 2020 · Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

In this paper, we investigate the novel problem of \textit{query-level early exiting}: deciding whether it is profitable to stop the traversal of the ranking ensemble early for all the candidate documents of a query, returning a ranking based on the additive scores computed by only a limited portion of the ensemble.
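Because the ensemble is additive, scores can be accumulated tree by tree and the traversal cut short for the whole query at once. The sketch below uses a simple top-1 margin as the profitability test; the paper's actual criterion differs, and every name here is illustrative:

```python
import numpy as np

def rank_with_query_level_exit(tree_scores, sentinel, margin):
    """Rank a query's candidate documents with an additive tree ensemble,
    possibly exiting early for the whole query.

    tree_scores: (n_trees, n_docs) per-tree additive contributions.
    After `sentinel` trees, if the partial top-1 document leads the
    runner-up by more than `margin`, return the partial ranking;
    otherwise traverse the full ensemble.
    """
    partial = tree_scores[:sentinel].sum(axis=0)
    top2 = np.sort(partial)[-2:]
    if top2[1] - top2[0] > margin:
        return np.argsort(-partial), sentinel         # early exit
    full = tree_scores.sum(axis=0)
    return np.argsort(-full), tree_scores.shape[0]    # full traversal

rng = np.random.default_rng(3)
scores = rng.normal(size=(100, 20))                   # 100 trees, 20 candidates
ranking, trees_used = rank_with_query_level_exit(scores, sentinel=30, margin=0.0)
```

The saving is the fraction of trees never evaluated when the exit fires, which is why the decision is taken once per query rather than once per document.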

Learning-To-Rank

Expansion via Prediction of Importance with Contextualization

1 code implementation · 29 Apr 2020 · Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder

We also observe that the performance is additive with the current leading first-stage retrieval methods, further narrowing the gap between inexpensive and cost-prohibitive passage ranking approaches.

Language Modeling · Language Modelling +4

Training Curricula for Open Domain Answer Re-Ranking

1 code implementation · 29 Apr 2020 · Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder

We show that the proposed heuristics can be used to build a training curriculum that down-weights difficult samples early in the training process.
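One simple way to realize such a curriculum is a loss weight that starts easiness-biased and anneals to uniform, so hard samples barely count early on and count fully by the end. This schedule is an illustrative stand-in, not the paper's exact heuristic:

```python
def curriculum_weight(difficulty: float, step: int, total_steps: int) -> float:
    """Down-weight difficult samples early in training.

    difficulty in [0, 1] (1 = hardest). The loss weight starts at
    (1 - difficulty) and linearly anneals to 1.0, so by the end of the
    curriculum all samples count equally.
    """
    progress = min(step / total_steps, 1.0)
    return (1.0 - difficulty) * (1.0 - progress) + 1.0 * progress

# A hard sample (difficulty 0.9) is nearly ignored at step 0
# and fully weighted once the curriculum ends.
```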

Re-Ranking
