Search Results for author: Jianmo Ni

Found 33 papers, 17 papers with code

Multi-stage Training with Improved Negative Contrast for Neural Passage Retrieval

no code implementations EMNLP 2021 Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang

In the context of neural passage retrieval, we study three promising techniques: synthetic data generation, negative sampling, and fusion.

Passage Retrieval, Retrieval +1

Interview: Large-scale Modeling of Media Dialog with Discourse Patterns and Knowledge Grounding

no code implementations EMNLP 2020 Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, Julian McAuley

In this work, we perform the first large-scale analysis of discourse in media dialog and its impact on generative modeling of dialog turns, with a focus on interrogative patterns and use of external knowledge.

Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval

1 code implementation 10 Nov 2023 Nandan Thakur, Jianmo Ni, Gustavo Hernández Ábrego, John Wieting, Jimmy Lin, Daniel Cer

There has been limited success for dense retrieval models in multilingual retrieval, due to uneven and scarce training data available across multiple languages.

Language Modelling, Large Language Model +1

Farzi Data: Autoregressive Data Distillation

no code implementations 15 Oct 2023 Noveen Sachdeva, Zexue He, Wang-Cheng Kang, Jianmo Ni, Derek Zhiyuan Cheng, Julian McAuley

We study data distillation for auto-regressive machine learning tasks, where the input and output have a strict left-to-right causal structure.

Language Modelling, Sequential Recommendation

Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction

no code implementations 10 May 2023 Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, Derek Zhiyuan Cheng

In this paper, we conduct a thorough examination of both CF and LLMs within the classic task of user rating prediction, which involves predicting a user's rating for a candidate item based on their past ratings.

Collaborative Filtering, World Knowledge

HYRR: Hybrid Infused Reranking for Passage Retrieval

no code implementations 20 Dec 2022 Jing Lu, Keith Hall, Ji Ma, Jianmo Ni

We present Hybrid Infused Reranking for Passage Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models.

Passage Retrieval, Retrieval

RISE: Leveraging Retrieval Techniques for Summarization Evaluation

no code implementations 17 Dec 2022 David Uthus, Jianmo Ni

RISE is first trained on a retrieval task using a dual-encoder retrieval setup, and can subsequently be used to evaluate a generated summary given an input document, without gold reference summaries.
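A reference-free dual-encoder evaluator of this kind scores a (document, summary) pair by embedding each side and taking their similarity. A minimal sketch of that scoring step, with a toy hashed bag-of-words standing in for the trained encoder towers (all function names here are hypothetical illustrations, not the paper's code):

```python
import zlib
import numpy as np

def embed(text, dim=64):
    """Toy 'encoder': hashed bag-of-words -> L2-normalized vector.
    Stands in for a trained dual-encoder tower; crc32 gives a
    run-stable token hash (Python's str hash() is salted per run)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def reference_free_score(document, summary):
    """Cosine similarity between document and summary embeddings,
    computed without any gold reference summary."""
    return float(np.dot(embed(document), embed(summary)))

doc = "the cat sat on the mat and watched the birds outside"
on_topic = "a cat watched birds from the mat"
off_topic = "stock markets rallied sharply this quarter"
```

A real system would swap the toy `embed` for the trained dual-encoder towers; the scoring interface stays the same.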

Information Retrieval, Retrieval

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

no code implementations 12 Oct 2022 Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.


Promptagator: Few-shot Dense Retrieval From 8 Examples

no code implementations 23 Sep 2022 Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

To amplify the power of a few examples, we propose Prompt-based Query Generation for Retrieval (Promptagator), which leverages large language models (LLMs) as a few-shot query generator, and creates task-specific retrievers based on the generated data.

Information Retrieval, Natural Questions +1

Knowledge-aware Neural Collective Matrix Factorization for Cross-domain Recommendation

no code implementations 27 Jun 2022 Li Zhang, Yan Ge, Jun Ma, Jianmo Ni, Haiping Lu

In this paper, we propose to incorporate the knowledge graph (KG) for CDR, which enables items in different domains to share knowledge.

General Knowledge

Exploring Dual Encoder Architectures for Question Answering

1 code implementation 14 Apr 2022 Zhe Dong, Jianmo Ni, Daniel M. Bikel, Enrique Alfonseca, Yuan Wang, Chen Qu, Imed Zitouni

We further explore and explain why parameter sharing in the projection layer significantly improves the efficacy of dual encoders, by directly probing the embedding spaces of the two encoder towers with the t-SNE algorithm.

Information Retrieval, Question Answering +1

Transformer Memory as a Differentiable Search Index

1 code implementation 14 Feb 2022 Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model.

Information Retrieval, Retrieval

Large Dual Encoders Are Generalizable Retrievers

2 code implementations 15 Dec 2021 Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.

Domain Generalization, Retrieval +1

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

3 code implementations ICLR 2022 Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training.

Denoising, Multi-Task Learning

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models

2 code implementations Findings (ACL) 2022 Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, Yinfei Yang

To support our investigation, we establish a new sentence representation transfer benchmark, SentGLUE, which extends the SentEval toolkit to nine tasks from the GLUE benchmark.

Contrastive Learning, Decoder +4

SHARE: a System for Hierarchical Assistive Recipe Editing

1 code implementation 17 May 2021 Shuyang Li, Yufei Li, Jianmo Ni, Julian McAuley

The large population of home cooks with dietary restrictions is under-served by existing cooking resources and recipe generation models.

Recipe Generation

Neural Passage Retrieval with Improved Negative Contrast

no code implementations 23 Oct 2020 Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang

In this paper we explore the effects of negative sampling in dual encoder models used to retrieve passages for automatic question answering.
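The in-batch negative setup that dual-encoder retrieval training like this typically builds on can be sketched as follows. This is a minimal numpy illustration of the softmax contrastive loss (each query's own passage is the positive; the other passages in the batch serve as negatives), not the paper's actual training code:

```python
import numpy as np

def in_batch_contrastive_loss(q, p):
    """q: (B, d) query embeddings, p: (B, d) passage embeddings.
    Row i of the similarity matrix scores query i against every
    passage in the batch; the diagonal entry is the positive, and
    the loss is softmax cross-entropy toward that diagonal."""
    sim = q @ p.T                                       # (B, B) similarities
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_softmax)))        # NLL of positives

# Demo: queries scaled to align strongly with their own passages
# should score a much lower loss than chance (log B for batch size B).
rng = np.random.default_rng(0)
B, d = 4, 8
p = rng.normal(size=(B, d))
aligned_loss = in_batch_contrastive_loss(p * 5.0, p)
```

Harder negative-sampling schemes (the subject of these papers) replace or augment the in-batch negatives with mined hard negatives, but the loss shape stays the same.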

Open-Domain Question Answering, Passage Retrieval +3

Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays

no code implementations Findings (EMNLP) 2020 Jianmo Ni, Chun-Nan Hsu, Amilcare Gentili, Julian McAuley

In this work, we focus on reporting abnormal findings on radiology images; instead of training on complete radiology reports, we propose a method to identify abnormal findings from the reports in addition to grouping them with unsupervised clustering and minimal rules.

Clustering, Cross-Modal Retrieval +3

Interview: A Large-Scale Open-Source Corpus of Media Dialog

no code implementations 7 Apr 2020 Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, Julian McAuley

Compared to existing large-scale proxies for conversational data, language models trained on our dataset exhibit better zero-shot out-of-domain performance on existing spoken dialog datasets, demonstrating its usefulness in modeling real-world conversations.

Addressing Marketing Bias in Product Recommendations

1 code implementation 4 Dec 2019 Mengting Wan, Jianmo Ni, Rishabh Misra, Julian McAuley

However, these interactions can be biased by how the product is marketed, for example due to the selection of a particular human model in a product image.

Collaborative Filtering, Fairness +2

Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects

no code implementations IJCNLP 2019 Jianmo Ni, Jiacheng Li, Julian McAuley

Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a customer's interests.

Decision Making, Language Modelling

Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation

1 code implementation IJCNLP 2019 Liliang Ren, Jianmo Ni, Julian McAuley

Experiments on both multi-domain and single-domain dialogue state tracking datasets show that our model not only scales easily with an increasing number of pre-defined domains and slots but also reaches state-of-the-art performance.

Decoder, Dialogue State Tracking +1

Generating Personalized Recipes from Historical User Preferences

1 code implementation IJCNLP 2019 Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, Julian McAuley

Existing approaches to recipe generation are unable to create recipes for users with culinary preferences but incomplete knowledge of ingredients in specific dishes.

Recipe Generation, Text Generation

Estimating Reactions and Recommending Products with Generative Models of Reviews

no code implementations IJCNLP 2017 Jianmo Ni, Zachary C. Lipton, Sharad Vikram, Julian McAuley

Natural language approaches that model information like product reviews have proved to be incredibly useful in improving the performance of such methods, as reviews provide valuable auxiliary information that can be used to better estimate latent user preferences and item properties.

Collaborative Filtering, Language Modelling +2
