no code implementations • 4 Jun 2023 • Sheshera Mysore, Andrew McCallum, Hamed Zamani
Narrative-driven recommendation (NDR) presents an information access problem where users solicit recommendations with verbose descriptions of their preferences and context, for example, travelers soliciting recommendations for points of interest while describing their likes/dislikes and travel circumstances.
no code implementations • 15 May 2023 • Zhiqi Huang, Hansi Zeng, Hamed Zamani, James Allan
In this work, we explore a Multilingual Information Retrieval (MLIR) task, where the collection includes documents in multiple languages.
Cross-Lingual Information Retrieval
Knowledge Distillation
no code implementations • 27 Apr 2023 • Hamed Zamani, Michael Bendersky
Instead of learning a vector for each query and document, our framework learns a multivariate distribution and uses negative multivariate KL divergence to compute the similarity between distributions.
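As a minimal sketch of the negative-KL similarity described above, assuming diagonal-covariance Gaussians (the full framework may use richer distributions), the closed-form KL divergence between two such Gaussians can serve as a distance, and its negation as a similarity score:

```python
import math

def neg_kl_similarity(mu_q, var_q, mu_d, var_d):
    """Negative KL divergence KL(Q || D) between two diagonal-covariance
    Gaussians, used as a query-document similarity (higher = more similar)."""
    kl = 0.0
    for mq, vq, md, vd in zip(mu_q, var_q, mu_d, var_d):
        kl += 0.5 * (vq / vd + (md - mq) ** 2 / vd - 1.0 + math.log(vd / vq))
    return -kl

# Identical distributions reach the maximum similarity of 0.
sim_same = neg_kl_similarity([0.0, 1.0], [1.0, 1.0], [0.0, 1.0], [1.0, 1.0])
sim_far = neg_kl_similarity([0.0], [1.0], [2.0], [1.0])
```

Representing each query and document as a distribution rather than a point lets the model express uncertainty about its embedding.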
1 code implementation • 26 Apr 2023 • Hansi Zeng, Surya Kallumadi, Zaid Alibadi, Rodrigo Nogueira, Hamed Zamani
Developing a universal model that can efficiently and effectively respond to a wide range of information access requests -- from retrieval to recommendation to question answering -- has been a long-lasting goal in the information retrieval community.
1 code implementation • 26 Apr 2023 • Alireza Salemi, Juan Altmayer Pizzorno, Hamed Zamani
Utilizing the passages retrieved by DEDR, we further introduce MM-FiD, an encoder-decoder multi-modal fusion-in-decoder model, for generating a textual answer for KI-VQA tasks.
no code implementations • 22 Apr 2023 • Alireza Salemi, Sheshera Mysore, Michael Bendersky, Hamed Zamani
This paper highlights the importance of personalization in the current state of natural language understanding and generation and introduces the LaMP benchmark -- a novel benchmark for training and evaluating language models for producing personalized outputs.
no code implementations • 18 Apr 2023 • Yen-Chieh Lien, Hamed Zamani, W. Bruce Croft
To address this issue, one can train NRMs via weak supervision, where a large dataset is automatically generated using an existing ranking model (called the weak labeler) for training NRMs.
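A toy sketch of the weak-supervision recipe above, with a simple term-overlap scorer standing in for the weak labeler (the paper's weak labeler would be an existing ranker such as BM25); the function names are hypothetical:

```python
def weak_label(query, docs):
    """Rank documents with a cheap term-overlap 'weak labeler' standing in
    for an existing ranking model; no human relevance judgments are needed."""
    q_terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())),
                  reverse=True)

def make_training_triples(query, docs):
    """Turn the weak ranking into (query, positive, negative) triples for
    pairwise training of a neural ranking model."""
    ranked = weak_label(query, docs)
    return [(query, ranked[i], ranked[j])
            for i in range(len(ranked)) for j in range(i + 1, len(ranked))]

triples = make_training_triples(
    "neural ranking",
    ["neural ranking models", "weak supervision", "cats"])
```

The automatically generated triples can then be fed to any pairwise training objective in place of labeled data.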
no code implementations • 9 Apr 2023 • Sheshera Mysore, Mahmood Jasim, Andrew McCallum, Hamed Zamani
Finally, we implement LACE in an interactive controllable recommender system and conduct a user study to demonstrate that users are able to improve the quality of recommendations they receive through interactions with an editable user profile.
no code implementations • 21 Dec 2022 • Ruicheng Xian, Honglei Zhuang, Zhen Qin, Hamed Zamani, Jing Lu, Ji Ma, Kai Hui, Han Zhao, Xuanhui Wang, Michael Bendersky
Domain adaptation aims to transfer the knowledge acquired by models trained on (data-rich) source domains to (low-resource) target domains, for which a popular method is invariant representation learning.
1 code implementation • 28 Oct 2022 • Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.
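One common instantiation of a retrieval-enhanced LM (a kNN-LM-style sketch, not necessarily the exact method of this paper) interpolates the base LM's next-token distribution with a distribution induced by retrieved contexts:

```python
def interpolate(p_lm, p_knn, lam=0.25):
    """Combine the base LM's next-token distribution with one induced by
    nearest-neighbor retrieval from a datastore:
    p(w) = lam * p_knn(w) + (1 - lam) * p_lm(w)."""
    vocab = set(p_lm) | set(p_knn)
    return {w: lam * p_knn.get(w, 0.0) + (1 - lam) * p_lm.get(w, 0.0)
            for w in vocab}

p_lm = {"paris": 0.6, "london": 0.4}
p_knn = {"paris": 1.0}  # all retrieved contexts continue with "paris"
p = interpolate(p_lm, p_knn)
```

Because the retrieved distribution sharpens probability mass onto continuations seen in the datastore, the mixture typically lowers perplexity on tokens the base LM is unsure about.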
no code implementations • 28 Sep 2022 • Sebastian Hofstätter, Jiecao Chen, Karthik Raman, Hamed Zamani
Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query they provide provenance items retrieved from an updateable knowledge base.
no code implementations • 7 Jul 2022 • Sebastian Hofstätter, Jiecao Chen, Karthik Raman, Hamed Zamani
This paper studies multi-task training of retrieval-augmented generation models for knowledge-intensive tasks.
no code implementations • 26 Jun 2022 • Sebastian Hofstätter, Nick Craswell, Bhaskar Mitra, Hamed Zamani, Allan Hanbury
Recently, several dense retrieval (DR) models have demonstrated performance competitive with the term-based retrieval methods that are ubiquitous in search systems.
no code implementations • 9 Jun 2022 • Leila Tavakoli, Johanne R. Trippas, Hamed Zamani, Falk Scholer, Mark Sanderson
Asking clarification questions is an active area of research; however, resources for training and evaluating search clarification methods are not sufficient.
no code implementations • 2 May 2022 • Hamed Zamani, Fernando Diaz, Mostafa Dehghani, Donald Metzler, Michael Bendersky
Although information access systems have long supported people in accomplishing a wide range of tasks, we propose broadening the scope of users of information access systems to include task-driven machines, such as machine learning models.
1 code implementation • 28 Apr 2022 • Hansi Zeng, Hamed Zamani, Vishwa Vinay
Recent work has shown that more effective dense retrieval models can be obtained by distilling ranking knowledge from an existing base re-ranking model.
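One widely used form of this ranking distillation (e.g. Margin-MSE; the paper's exact objective may differ) trains the student dense retriever to match the teacher re-ranker's score *margin* between a positive and a negative document rather than its absolute scores:

```python
def margin_mse_loss(teacher_pos, teacher_neg, student_pos, student_neg):
    """Margin-MSE distillation loss: penalize the squared difference between
    the teacher's and the student's positive-vs-negative score margins."""
    t_margin = teacher_pos - teacher_neg
    s_margin = student_pos - student_neg
    return (t_margin - s_margin) ** 2
```

Matching margins instead of raw scores makes the objective invariant to the teacher's score scale and offset.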
no code implementations • 21 Jan 2022 • Hamed Zamani, Johanne R. Trippas, Jeff Dalton, Filip Radlinski
Conversational information seeking (CIS) is concerned with a sequence of interactions between one or more users and an information system.
no code implementations • 2 Nov 2021 • Razieh Rahimi, Youngwoo Kim, Hamed Zamani, James Allan
GenEx explains a search result by providing a terse description for the query aspect covered by that result.
no code implementations • NAACL 2022 • Neha Kennard, Tim O'Gorman, Rajarshi Das, Akshay Sharma, Chhandak Bagchi, Matthew Clinton, Pranay Kumar Yelugam, Hamed Zamani, Andrew McCallum
At the foundation of scientific evaluation is the labor-intensive process of peer review.
1 code implementation • 13 Sep 2021 • Mohammad Aliannejadi, Leif Azzopardi, Hamed Zamani, Evangelos Kanoulas, Paul Thomas, Nick Craswell
In this paper, we present a model for conversational search -- from which we instantiate different observed conversational search strategies, where the agent elicits: (i) Feedback-First, or (ii) Feedback-After.
no code implementations • 17 Jun 2021 • Rosie Jones, Hamed Zamani, Markus Schedl, Ching-Wei Chen, Sravana Reddy, Ann Clifton, Jussi Karlgren, Helia Hashemi, Aasish Pappu, Zahra Nazari, Longqi Yang, Oguz Semerci, Hugues Bouchard, Ben Carterette
Podcasts are spoken documents across a wide range of genres and styles, with growing listenership across the world and a rapidly lowering barrier to entry for both listeners and creators.
1 code implementation • 20 May 2021 • Sebastian Hofstätter, Bhaskar Mitra, Hamed Zamani, Nick Craswell, Allan Hanbury
An emerging recipe for achieving state-of-the-art effectiveness in neural document re-ranking involves utilizing large pre-trained language models (e.g., BERT) to evaluate all individual passages in the document and then aggregating the outputs by pooling or additional Transformer layers.
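The passage-aggregation step described above can be sketched as follows; the per-passage scores would come from a BERT re-ranker, and the strategy names here are illustrative:

```python
def score_document(passage_scores, strategy="max"):
    """Aggregate per-passage relevance scores (e.g. from a BERT re-ranker)
    into a single document score by pooling."""
    if strategy == "max":
        return max(passage_scores)
    if strategy == "mean":
        return sum(passage_scores) / len(passage_scores)
    if strategy == "first":
        return passage_scores[0]
    raise ValueError(f"unknown strategy: {strategy}")
```

Max pooling rewards a document for its single most relevant passage, while mean pooling favors documents that are relevant throughout.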
1 code implementation • 9 May 2021 • Chen Qu, Hamed Zamani, Liu Yang, W. Bruce Croft, Erik Learned-Miller
We first conduct sparse retrieval with BM25 and study expanding the question with object names and image captions.
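The expansion step is simple to sketch: the question text is concatenated with detected object names and the image caption before being handed to the sparse retriever (BM25 in the paper). The example values below are hypothetical:

```python
def expand_question(question, object_names, caption):
    """Append detected object names and the image caption to the question so
    a sparse retriever has visual context to match on lexically."""
    return " ".join([question] + object_names + [caption])

q = expand_question("what breed is the dog",
                    ["dog", "frisbee"],
                    "a dog catching a frisbee in a park")
```

The expanded string is then scored against the collection exactly like an ordinary keyword query.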
no code implementations • 19 Apr 2021 • Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
The Transformer-Kernel (TK) model has demonstrated strong reranking performance on the TREC Deep Learning benchmark -- and can be considered an efficient (but slightly less effective) alternative to other Transformer-based architectures that employ (i) large-scale pretraining (high training cost), (ii) joint encoding of query and document (high inference cost), and (iii) a larger number of Transformer layers (both high training and high inference costs).
1 code implementation • 24 Mar 2021 • Sheshera Mysore, Tim O'Gorman, Andrew McCallum, Hamed Zamani
Query by Example is a well-known information retrieval task in which a document is chosen by the user as the search query and the goal is to retrieve relevant documents from a large collection.
no code implementations • 18 Jan 2021 • Jaime Arguello, Adam Ferguson, Emery Fine, Bhaskar Mitra, Hamed Zamani, Fernando Diaz
Using movie search as a case study, we explore the characteristics of questions posed by searchers in TOT states in a community question answering website.
1 code implementation • 9 Jan 2021 • Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, W. Bruce Croft
Here we focus on context-aware models to leverage the rich contextual information available to mobile devices.
no code implementations • 14 Nov 2020 • Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
We benchmark Conformer-Kernel models under the strict blind evaluation setting of the TREC 2020 Deep Learning track.
1 code implementation • 20 Jul 2020 • Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
In this work, we extend the TK architecture to the full retrieval setting by incorporating the query term independence assumption.
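Under the query term independence assumption, a document's score decomposes into a sum of per-term contributions, so each term's scores can be precomputed offline and served from an inverted index. A minimal sketch (the index contents below are hypothetical):

```python
def qti_score(query_terms, doc_id, term_index):
    """Score = sum of independent per-term contributions, enabling offline
    precomputation of term-document scores in an inverted index."""
    return sum(term_index.get(t, {}).get(doc_id, 0.0) for t in query_terms)

# Hypothetical precomputed term -> {doc: score} postings.
index = {"transformer": {"d1": 1.2, "d2": 0.3},
         "kernel": {"d1": 0.5}}
```

This decomposition is what turns an expensive reranking model into one usable for full-collection retrieval.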
1 code implementation • 17 Jun 2020 • Hamed Zamani, Gord Lueck, Everest Chen, Rodolfo Quispe, Flint Luu, Nick Craswell
In this paper, we introduce MIMICS, a collection of search clarification datasets for real web search queries sampled from the Bing query logs.
no code implementations • 13 Jun 2020 • Helia Hashemi, Hamed Zamani, W. Bruce Croft
Asking clarifying questions in response to ambiguous or faceted queries has been recognized as a useful technique for various information retrieval systems, especially conversational search systems with limited bandwidth interfaces.
no code implementations • 30 May 2020 • Hamed Zamani, Bhaskar Mitra, Everest Chen, Gord Lueck, Fernando Diaz, Paul N. Bennett, Nick Craswell, Susan T. Dumais
We also propose a model for learning representation for clarifying questions based on the user interaction data as implicit feedback.
1 code implementation • 11 May 2020 • Sebastian Hofstätter, Hamed Zamani, Bhaskar Mitra, Nick Craswell, Allan Hanbury
In this work, we propose a local self-attention which considers a moving window over the document terms and for each term attends only to other terms in the same window.
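The windowed attention described above can be sketched as a masked softmax: each term's attention is computed only over terms within a fixed window of its position. This toy version operates on a raw score matrix rather than learned projections:

```python
import math

def local_attention_weights(scores, window):
    """Softmax attention where position i attends only to positions j with
    |i - j| <= window; `scores` is an n x n matrix of raw dot products."""
    n = len(scores)
    weights = []
    for i in range(n):
        allowed = [j for j in range(n) if abs(i - j) <= window]
        exps = {j: math.exp(scores[i][j]) for j in allowed}
        z = sum(exps.values())
        weights.append([exps.get(j, 0.0) / z for j in range(n)])
    return weights

w = local_attention_weights([[0.0] * 5 for _ in range(5)], window=1)
```

Restricting attention to a moving window makes cost linear in document length instead of quadratic, which is what makes long-document scoring tractable.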
no code implementations • 19 Jan 2020 • Krisztian Balog, Lucie Flekova, Matthias Hagen, Rosie Jones, Martin Potthast, Filip Radlinski, Mark Sanderson, Svitlana Vakulenko, Hamed Zamani
This paper discusses the potential for creating academic resources (tools, data, and evaluation approaches) to support research in conversational search, by focusing on realistic information needs and conversational interactions.
1 code implementation • 18 Dec 2019 • Hamed Zamani, Nick Craswell
Such research will require data and tools, to allow the implementation and study of conversational systems.
no code implementations • WS 2019 • Ameya Godbole, Dilip Kavarthapu, Rajarshi Das, Zhiyu Gong, Abhishek Singhal, Hamed Zamani, Mo Yu, Tian Gao, Xiaoxiao Guo, Manzil Zaheer, Andrew McCallum
Multi-hop question answering (QA) requires an information retrieval (IR) system that can find the multiple pieces of supporting evidence needed to answer the question, making the retrieval process very challenging.
no code implementations • 19 Aug 2019 • Yashar Deldjoo, Vito Walter Anelli, Hamed Zamani, Alejandro Bellogin, Tommaso Di Noia
We present a probabilistic framework based on generalized cross entropy to evaluate the fairness of recommender systems under this perspective. We show that the proposed framework is flexible and explanatory, allowing domain knowledge (through an ideal fair distribution) to be incorporated to help understand which item or user aspects a recommendation algorithm over- or under-represents.
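As a hedged illustration of the idea, the standard cross entropy between an ideal fair exposure distribution and the exposure distribution a recommender actually produces gives a fairness cost (the paper uses a *generalized* cross entropy, which this standard case only approximates):

```python
import math

def fairness_cost(p_fair, p_rec, eps=1e-12):
    """Cross entropy H(p_fair, p_rec) between an ideal fair exposure
    distribution and the recommender's actual exposure distribution over
    groups; lower values indicate a fairer allocation."""
    return -sum(p_fair[g] * math.log(p_rec.get(g, 0.0) + eps)
                for g in p_fair)
```

A recommender whose exposure matches the ideal distribution minimizes the cost; skewed exposure raises it.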
2 code implementations • 15 Jul 2019 • Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, W. Bruce Croft
In this paper, we formulate the task of asking clarifying questions in open-domain information-seeking conversational systems.
1 code implementation • 22 May 2019 • Helia Hashemi, Mohammad Aliannejadi, Hamed Zamani, W. Bruce Croft
Despite the importance of the task, the community still feels the significant lack of large-scale non-factoid question answering collections with real questions and comprehensive relevance judgments.
no code implementations • 5 May 2019 • Harshith Padigela, Hamed Zamani, W. Bruce Croft
The bidirectional encoder representations from transformers (BERT) model has recently advanced the state-of-the-art in passage re-ranking.
no code implementations • 16 Mar 2019 • Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W. Bruce Croft, Xue-Qi Cheng
Ranking models lie at the heart of research on information retrieval (IR).
1 code implementation • 27th ACM International Conference on Information and Knowledge Management (CIKM '18) 2018 • Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik Learned-Miller, Jaap Kamps
In this work, we propose a standalone neural ranking model (SNRM) by introducing a sparsity property to learn a latent sparse representation for each query and document.
Ranked #12 on Ad-Hoc Information Retrieval on TREC Robust04
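The core trick in SNRM is that a ReLU-style nonlinearity yields a latent *sparse* vector, so documents can be stored in an inverted index keyed by latent dimension and scored with a sparse dot product. A minimal sketch with hypothetical vectors:

```python
def sparse_repr(dense, threshold=0.0):
    """Keep only dimensions above the threshold (ReLU-style), producing a
    latent sparse representation suitable for an inverted index."""
    return {i: v for i, v in enumerate(dense) if v > threshold}

def sparse_dot(q, d):
    """Retrieval score = dot product over the few shared nonzero dimensions."""
    return sum(v * d[i] for i, v in q.items() if i in d)

q = sparse_repr([0.0, 2.0, -1.0, 0.5])   # -> {1: 2.0, 3: 0.5}
d = sparse_repr([1.0, 1.0, 0.0, 0.0])    # -> {0: 1.0, 1: 1.0}
```

Because most latent dimensions are zero, query processing touches only the postings of the query's active dimensions, just as in classic term-based retrieval.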
no code implementations • 2 Oct 2018 • Hamed Zamani, Markus Schedl, Paul Lamere, Ching-Wei Chen
We further report and analyze the results obtained by the top performing teams in each track and explore the approaches taken by the winners.
no code implementations • 17 Jul 2017 • Liu Yang, Hamed Zamani, Yongfeng Zhang, Jiafeng Guo, W. Bruce Croft
We further evaluate the neural matching models in the next question prediction task in conversations.
no code implementations • 9 May 2017 • Hamed Zamani, W. Bruce Croft
This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information.
1 code implementation • 28 Apr 2017 • Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, W. Bruce Croft
Our experiments indicate that employing proper objective functions and letting the networks learn the input representation based on weakly supervised data leads to impressive performance, with over 13% and 35% MAP improvements over the BM25 model on the Robust and ClueWeb collections, respectively.
Ranked #8 on Ad-Hoc Information Retrieval on TREC Robust04 (MAP metric)
no code implementations • 29 Jan 2015 • Hamed Zamani, Azadeh Shakery, Pooya Moradi
In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on engagement, i.e., the total number of retweets and favorites each will gain.
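The ranking target described above reduces to sorting a user's instances by their combined retweet and favorite counts; a toy sketch with hypothetical tweets (the real task predicts these counts before they are observed):

```python
def rank_by_engagement(tweets):
    """Rank movie-rating tweets by engagement, defined as the total number
    of retweets and favorites each tweet gains."""
    return sorted(tweets,
                  key=lambda t: t["retweets"] + t["favorites"],
                  reverse=True)

tweets = [{"text": "Inception 9/10", "retweets": 3, "favorites": 5},
          {"text": "Up 8/10", "retweets": 10, "favorites": 2}]
ranked = rank_by_engagement(tweets)
```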