no code implementations • NAACL (TrustNLP) 2022 • Minghan Li, Xueguang Ma, Jimmy Lin
The bi-encoder design of the dense passage retriever (DPR) is a key factor in its success in open-domain question answering (QA), yet it is unclear how DPR's question encoder and passage encoder individually contribute to overall performance, which we refer to as the encoder attribution problem.
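To make the bi-encoder design concrete: DPR scores a passage by the inner product of vectors produced by two separate encoders, which is what makes per-encoder attribution a meaningful question. Below is a minimal sketch using the public DPR checkpoints on Hugging Face; it illustrates the architecture only, not the paper's attribution analysis.

```python
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

# Two independent encoders: one for questions, one for passages.
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
p_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
p_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "who wrote the declaration of independence"
passages = ["Thomas Jefferson was the principal author of the Declaration.",
            "The Eiffel Tower is a wrought-iron lattice tower in Paris."]

with torch.no_grad():
    q_vec = q_enc(**q_tok(question, return_tensors="pt")).pooler_output                  # (1, 768)
    p_vecs = p_enc(**p_tok(passages, return_tensors="pt", padding=True)).pooler_output   # (2, 768)

print(q_vec @ p_vecs.T)  # inner-product relevance scores, one per passage
```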
no code implementations • EMNLP (BlackboxNLP) 2021 • Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy Lin
Fine-tuned pre-trained transformers achieve the state of the art in passage reranking.
1 code implementation • Findings (EMNLP) 2021 • Minghan Li, Ming Li, Kun Xiong, Jimmy Lin
Our method reaches state-of-the-art performance on 5 benchmark QA datasets, with up to 10% improvement in top-100 accuracy compared to a joint-training multi-task DPR on SQuAD.
no code implementations • EMNLP (sustainlp) 2020 • Xinyu Zhang, Andrew Yates, Jimmy Lin
Researchers have proposed simple yet effective techniques for the retrieval problem based on using BERT as a relevance classifier to rerank initial candidates from keyword search.
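A sketch of the rerank-with-BERT pattern described here, using a publicly available MS MARCO cross-encoder as the relevance classifier (the specific checkpoint is our choice, not necessarily the one used in the paper):

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "what causes ocean tides"
candidates = [  # e.g., top passages returned by an initial BM25 keyword search
    "Tides are caused by the gravitational pull of the moon and the sun.",
    "The stock market closed higher on Tuesday after a volatile session.",
]
scores = reranker.predict([(query, passage) for passage in candidates])
for score, passage in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {passage}")
```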
1 code implementation • EMNLP (sustainlp) 2020 • Ji Xin, Rodrigo Nogueira, YaoLiang Yu, Jimmy Lin
Pre-trained language models such as BERT have shown their effectiveness in various tasks.
no code implementations • ACL (NLP4Prog) 2021 • Xinyu Zhang, Ji Xin, Andrew Yates, Jimmy Lin
The task of semantic code search is to retrieve code snippets from a source code corpus based on an information need expressed in natural language.
no code implementations • EMNLP (sustainlp) 2021 • Yue Zhang, ChengCheng Hu, Yuqi Liu, Hui Fang, Jimmy Lin
It is well known that rerankers built on pretrained transformer models such as BERT have dramatically improved retrieval effectiveness in many tasks.
1 code implementation • Findings (EMNLP) 2021 • Anup Anand Deshmukh, Qianqiu Zhang, Ming Li, Jimmy Lin, Lili Mou
In this paper, we address unsupervised chunking as a new task of syntactic structure induction, which is helpful for understanding the linguistic structures of human languages as well as processing low-resource languages.
no code implementations • EMNLP 2021 • Raphael Tang, Karun Kumar, Kendra Chalkley, Ji Xin, Liming Zhang, Wenyan Li, Gefei Yang, Yajie Mao, Junho Shin, Geoffrey Craig Murray, Jimmy Lin
Query auto completion (QAC) is the task of predicting a search engine user’s final query from their intermediate, incomplete query.
Automatic Speech Recognition (ASR) +1
no code implementations • EMNLP 2021 • Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, Jimmy Lin
Recent work has shown that dense passage retrieval techniques achieve better ranking accuracy in open-domain question answering compared to sparse retrieval techniques such as BM25, but at the cost of large space and memory requirements.
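The space cost is easy to see with back-of-the-envelope arithmetic; the figures below assume the common DPR setup of roughly 21 million Wikipedia passages and 768-dimensional float32 vectors, and are illustrative rather than taken from the paper:

```python
num_passages = 21_000_000   # Wikipedia split into ~21M 100-word passages (DPR setup)
dim = 768                   # BERT-base hidden size
bytes_per_float = 4         # float32

index_bytes = num_passages * dim * bytes_per_float
print(f"flat dense index: ~{index_bytes / 2**30:.0f} GiB")  # ~60 GiB

# By comparison, a compressed inverted index for the same corpus is
# typically only a few GiB, which is the gap this line of work targets.
```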
1 code implementation • EMNLP (MRL) 2021 • Kelechi Ogueji, Yuxin Zhu, Jimmy Lin
In this work, we challenge this assumption and present the first attempt at training a multilingual language model on only low-resource languages.
no code implementations • EMNLP (MRL) 2021 • Peng Shi, Rui Zhang, He Bai, Jimmy Lin
Dense retrieval has shown great success for passage ranking in English.
no code implementations • EMNLP (sdp) 2020 • Shane Ding, Edwin Zhang, Jimmy Lin
Cydex is a platform that provides neural search infrastructure for domain-specific scholarly literature.
no code implementations • ACL (RepL4NLP) 2021 • Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin
We present an efficient training approach to text retrieval with dense representations that applies knowledge distillation using the ColBERT late-interaction ranking model.
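A minimal sketch of the kind of soft-label distillation objective involved, where a dense bi-encoder student matches the score distribution of a ColBERT teacher over each query's candidate passages (the exact training recipe is in the paper; this is only the generic loss):

```python
import torch.nn.functional as F

def distillation_loss(student_scores, teacher_scores, temperature=1.0):
    """KL divergence between teacher and student score distributions.

    student_scores: (batch, num_candidates) dot products from the bi-encoder.
    teacher_scores: (batch, num_candidates) ColBERT late-interaction scores.
    """
    t = temperature
    teacher_probs = F.softmax(teacher_scores / t, dim=-1)
    student_logp = F.log_softmax(student_scores / t, dim=-1)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean") * (t ** 2)
```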
1 code implementation • 2 Jun 2023 • Aleksandra Piktus, Odunayo Ogundepo, Christopher Akiki, Akintunde Oladipo, Xinyu Zhang, Hailey Schoelkopf, Stella Biderman, Martin Potthast, Jimmy Lin
We discuss how Pyserini - a widely used toolkit for reproducible IR research - can be integrated with the Hugging Face ecosystem of open-source AI libraries and artifacts.
no code implementations • 23 May 2023 • Vanessa Liao, Syed Shariyar Murtaza, Yifan Nie, Jimmy Lin
Our experiments on real production data show that this method of fine-tuning improves downstream text classification tasks compared to fine-tuning only on domain-specific text.
no code implementations • 19 May 2023 • Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang, Jimmy Lin, Donald Metzler, Vinh Q. Tran
Popularized by the Differentiable Search Index, the emerging paradigm of generative retrieval re-frames the classic information retrieval problem into a sequence-to-sequence modeling task, forgoing external indices and encoding an entire document corpus within a single Transformer.
no code implementations • 14 May 2023 • Josh Seltzer, Jiahua Pan, Kathy Cheng, Yuxiao Sun, Santosh Kolagati, Jimmy Lin, Shi Zong
Market research surveys are a powerful methodology for understanding consumer perspectives at scale, but are limited by depth of understanding and insights.
no code implementations • 10 May 2023 • Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Jimmy Lin
The ever-increasing size of language models curtails their widespread access to the community, thereby galvanizing many companies and startups into offering access to large language models through APIs.
no code implementations • 3 May 2023 • Xueguang Ma, Xinyu Zhang, Ronak Pradeep, Jimmy Lin
Supervised ranking methods based on bi-encoder or cross-encoder architectures have shown success in multi-stage text ranking tasks, but they require large amounts of relevance judgments as training data.
no code implementations • 24 Apr 2023 • Xueguang Ma, Tommaso Teofili, Jimmy Lin
With Pyserini, which provides a Python interface to Anserini, users gain access to both sparse and dense retrieval models, as Pyserini implements bindings to the Faiss vector search library alongside Lucene inverted indexes in a uniform, consistent interface.
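The uniform interface looks roughly like the following sketch; the prebuilt index and encoder names are assumptions that change across Pyserini releases, so check the project documentation before running:

```python
from pyserini.search.lucene import LuceneSearcher
from pyserini.search.faiss import FaissSearcher, TctColBertQueryEncoder

# Sparse retrieval over a Lucene inverted index.
sparse = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
sparse_hits = sparse.search("what is a lobster roll", k=10)

# Dense retrieval over a Faiss index, with the same search() call shape.
encoder = TctColBertQueryEncoder("castorini/tct_colbert-v2-hnp-msmarco")
dense = FaissSearcher.from_prebuilt_index("msmarco-passage-tct_colbert-v2-hnp-bf", encoder)
dense_hits = dense.search("what is a lobster roll", k=10)

for hit in sparse_hits[:3]:
    print(hit.docid, round(hit.score, 3))
```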
2 code implementations • 4 Apr 2023 • Jheng-Hong Yang, Carlos Lassance, Rafael Sampaio de Rezende, Krishna Srinivasan, Miriam Redi, Stéphane Clinchant, Jimmy Lin
This paper presents the AToMiC (Authoring Tools for Multimedia Content) dataset, designed to advance research in image/text cross-modal retrieval.
no code implementations • 3 Apr 2023 • Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, Xinyu Zhang
The advent of multilingual language models has generated a resurgence of interest in cross-lingual information retrieval (CLIR), which is the task of searching documents in one language with queries from another.
1 code implementation • 28 Feb 2023 • Christopher Akiki, Odunayo Ogundepo, Aleksandra Piktus, Xinyu Zhang, Akintunde Oladipo, Jimmy Lin, Martin Potthast
We present Spacerini, a modular framework for the seamless building and deployment of interactive search applications, designed to facilitate the qualitative analysis of large-scale research datasets.
1 code implementation • 15 Feb 2023 • Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, Xilun Chen
We hence propose a new DA approach with diverse queries and sources of supervision to progressively train a generalizable DR. As a result, DRAGON, our dense retriever trained with diverse augmentation, is the first BERT-base-sized DR to achieve state-of-the-art effectiveness in both supervised and zero-shot evaluations and even competes with models using more complex late interaction (ColBERTv2 and SPLADE++).
1 code implementation • 13 Feb 2023 • Minghan Li, Sheng-Chieh Lin, Xueguang Ma, Jimmy Lin
Multi-vector retrieval methods have demonstrated their effectiveness on various retrieval datasets, and among them, ColBERT is the most established method based on the late interaction of contextualized token embeddings of pre-trained language models.
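The late interaction that ColBERT popularized is the MaxSim operator: each query token embedding takes its maximum similarity over all document token embeddings, and the maxima are summed. A self-contained sketch:

```python
import torch
import torch.nn.functional as F

def late_interaction_score(q_embs, d_embs):
    """ColBERT-style MaxSim over contextualized token embeddings.

    q_embs: (num_query_tokens, dim), d_embs: (num_doc_tokens, dim),
    both L2-normalized so dot products are cosine similarities.
    """
    sim = q_embs @ d_embs.T             # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()  # sum of per-query-token maxima

q = F.normalize(torch.randn(5, 128), dim=-1)   # toy embeddings
d = F.normalize(torch.randn(80, 128), dim=-1)
print(late_interaction_score(q, d))
```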
no code implementations • 13 Feb 2023 • Xinyu Zhang, Minghan Li, Jimmy Lin
Recent progress in information retrieval finds that embedding queries and documents as multiple vectors yields a robust bi-encoder retriever on out-of-distribution datasets.
no code implementations • 17 Jan 2023 • Shi Zong, Josh Seltzer, Jiahua Pan, Kathy Cheng, Jimmy Lin
Industry practitioners constantly face the problem of choosing the appropriate model for deployment under different considerations, such as maximizing a metric that is crucial for production or reducing total cost given financial concerns.
1 code implementation • 27 Dec 2022 • Jimmy Lin
Reproducibility is an ideal that no researcher would dispute "in the abstract", but when aspirations meet the cold hard reality of the academic grind, reproducibility often "loses out".
1 code implementation • 20 Dec 2022 • Luyu Gao, Xueguang Ma, Jimmy Lin, Jamie Callan
Given a query, HyDE first zero-shot instructs an instruction-following language model (e.g., InstructGPT) to generate a hypothetical document.
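A sketch of the HyDE flow as described above. The `generate` function is a placeholder for any instruction-following LLM call, and the sentence encoder named below is a stand-in (the paper uses an unsupervised Contriever encoder):

```python
from sentence_transformers import SentenceTransformer

def generate(prompt: str) -> str:
    # Placeholder for an instruction-following LLM (e.g., InstructGPT);
    # returns a canned passage here so the sketch runs end to end.
    return ("Wisdom tooth removal usually takes 20 minutes to an hour, "
            "depending on how many teeth are extracted and their position.")

query = "how long does it take to remove wisdom teeth"
hypothetical_doc = generate(f"Write a passage that answers the question: {query}")

# Key idea: encode the *hypothetical* document rather than the query itself,
# then run nearest-neighbor search over the real corpus with its vector.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_vector = encoder.encode(hypothetical_doc)
print(query_vector.shape)  # feed this into a dense index, e.g., Faiss
```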
no code implementations • 19 Dec 2022 • Zhiying Jiang, Matthew Y. R. Yang, Mikhail Tsirlin, Raphael Tang, Jimmy Lin
Our method also performs particularly well in few-shot settings where labeled data are too scarce for DNNs to achieve a satisfying accuracy.
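This is the parameter-free gzip approach: pair the normalized compression distance (NCD) with a simple kNN vote, no training required. A minimal sketch, assuming the standard formulation:

```python
import gzip

def ncd(x: str, y: str) -> float:
    """Normalized compression distance with gzip as the compressor."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_classify(test_text, train_set, k=3):
    """Majority vote among the k nearest training texts under NCD."""
    dists = sorted((ncd(test_text, text), label) for text, label in train_set)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

train = [("the team won the game in overtime", "sports"),
         ("the senate passed the budget bill", "politics")]
print(knn_classify("the striker scored twice in the match", train, k=1))
```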
no code implementations • 10 Dec 2022 • Yizhen Zhong, Jiajie Xiao, Thomas Vetterli, Mahan Matin, Ellen Loo, Jimmy Lin, Richard Bourgon, Ofer Shapira
The application of natural language processing (NLP) to cancer pathology reports has been focused on detecting cancer cases, largely ignoring precancerous cases.
no code implementations • 21 Nov 2022 • Raphael Tang, Karun Kumar, Gefei Yang, Akshat Pandey, Yajie Mao, Vladislav Belyaev, Madhuri Emmadi, Craig Murray, Ferhan Ture, Jimmy Lin
In this paper, we explore training and deploying an ASR system in the label-scarce, compute-limited setting.
Automatic Speech Recognition (ASR) +1
1 code implementation • 18 Nov 2022 • Minghan Li, Sheng-Chieh Lin, Barlas Oguz, Asish Ghoshal, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, Xilun Chen
In this paper, we unify different multi-vector retrieval models from a token routing viewpoint and propose conditional token interaction via dynamic lexical routing, namely CITADEL, for efficient and effective multi-vector retrieval.
no code implementations • 1 Nov 2022 • Jimmy Lin
We evaluate this proposal and find that it can reduce the negative impact of noise added by differential privacy mechanisms on test accuracy by up to 24.6%, and reduce the negative impact of gradient sparsification on test accuracy by up to 15.1%.
no code implementations • 25 Oct 2022 • Peng Shi, Rui Zhang, He Bai, Jimmy Lin
We also include global translation exemplars for a target language to facilitate the translation process for large language models.
1 code implementation • 18 Oct 2022 • Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin
MIRACL (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual dataset we have built for the WSDM 2023 Cup challenge that focuses on ad hoc retrieval across 18 different languages, which collectively encompass over three billion native speakers around the world.
no code implementations • 13 Oct 2022 • Linqing Liu, Minghan Li, Jimmy Lin, Sebastian Riedel, Pontus Stenetorp
To balance these two considerations, we propose a combination of an effective filtering strategy and fusion of the retrieved documents based on the generation probability of each context.
no code implementations • 11 Oct 2022 • Odunayo Ogundepo, Xinyu Zhang, Jimmy Lin
However, only a handful of the 7000+ languages on the planet benefit from specialized, custom-built tokenization algorithms, while the other languages are stuck with a "default" whitespace tokenizer, which cannot capture the intricacies of different languages.
1 code implementation • 10 Oct 2022 • Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin, Ferhan Ture
Large-scale diffusion neural networks represent a substantial milestone in text-to-image generation, but they remain poorly understood, lacking interpretability analyses.
no code implementations • 31 Jul 2022 • Ji Xin, Raphael Tang, Zhiying Jiang, YaoLiang Yu, Jimmy Lin
There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc.
1 code implementation • 31 Jul 2022 • Sheng-Chieh Lin, Minghan Li, Jimmy Lin
Pre-trained language models have been successful in many knowledge-intensive NLP tasks.
no code implementations • 23 Jun 2022 • Zhiying Jiang, Yiqin Dai, Ji Xin, Ming Li, Jimmy Lin
Most real-world problems that machine learning algorithms are expected to solve involve (1) unknown data distributions, (2) little domain-specific knowledge, and (3) datasets with limited annotation.
1 code implementation • 20 Jun 2022 • Sheng-Chieh Lin, Jimmy Lin
In contrast, our work integrates lexical representations with dense semantic representations by densifying high-dimensional lexical representations into what we call low-dimensional dense lexical representations (DLRs).
1 code implementation • 23 May 2022 • Nandan Thakur, Nils Reimers, Jimmy Lin
In this work, we show that binary embedding models like BPR and JPQ can perform significantly worse than baselines once there is a domain-shift involved.
1 code implementation • 19 May 2022 • Minghan Li, Xinyu Zhang, Ji Xin, Hongyang Zhang, Jimmy Lin
For example, on MS MARCO Passage v1, our method yields an average candidate set size of 27 out of 1,000, which increases reranking speed by about 37 times, while MRR@10 remains above a pre-specified value of 0.38 with about 90% empirical coverage; the empirical baselines fail to provide such a guarantee.
no code implementations • 30 Apr 2022 • Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon
In this paper we consider the problem of combining the relevance signals from sparse and dense retrievers in the context of Pseudo Relevance Feedback (PRF).
no code implementations • 5 Apr 2022 • Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, Jimmy Lin
Dense retrieval models using a transformer-based bi-encoder design have emerged as an active area of research.
1 code implementation • 21 Mar 2022 • Wei Zhong, Jheng-Hong Yang, Yuqing Xie, Jimmy Lin
With the recent success of dense retrieval methods based on bi-encoders, studies have applied this approach to various interesting downstream retrieval tasks with good efficiency and in-domain effectiveness.
Ranked #1 on Math Information Retrieval on ARQMath2 - Task 1 (using extra training data)
1 code implementation • 11 Mar 2022 • Luyu Gao, Xueguang Ma, Jimmy Lin, Jamie Callan
In this paper, we present Tevatron, a dense retrieval toolkit optimized for efficiency, flexibility, and code simplicity.
no code implementations • 26 Jan 2022 • Ellen M. Voorhees, Ian Soboroff, Jimmy Lin
Neural retrieval models are generally regarded as fundamentally different from the retrieval techniques used in the late 1990s, when the TREC ad hoc test collections were constructed.
no code implementations • 17 Dec 2021 • Jheng-Hong Yang, Xueguang Ma, Jimmy Lin
Sparse lexical representation learning has demonstrated much progress in improving passage retrieval effectiveness in recent models such as DeepImpact, uniCOIL, and SPLADE.
1 code implementation • 13 Dec 2021 • Hang Li, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon
Finally, we contribute a study of the generalisability of the ANCE-PRF method when dense retrievers other than ANCE are used for the first round of retrieval and for encoding the PRF signal.
1 code implementation • 9 Dec 2021 • Sheng-Chieh Lin, Jimmy Lin
Learned sparse and dense representations capture different successful approaches to text retrieval and the fusion of their results has proven to be more effective and robust.
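One widely used way to fuse runs from sparse and dense retrievers is reciprocal rank fusion (RRF); the paper studies fusion in depth, and the snippet below is just the generic RRF recipe for reference:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists of docids: each docid earns 1 / (k + rank)
    from every list it appears in, and the contributions are summed."""
    fused = defaultdict(float)
    for ranking in rankings:
        for rank, docid in enumerate(ranking, start=1):
            fused[docid] += 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

sparse_run = ["d3", "d1", "d7"]  # e.g., from BM25 or a learned sparse model
dense_run = ["d1", "d9", "d3"]   # e.g., from a dense bi-encoder
print(reciprocal_rank_fusion([sparse_run, dense_run]))  # ['d1', 'd3', ...]
```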
no code implementations • 22 Oct 2021 • Joel Mackenzie, Andrew Trotman, Jimmy Lin
Recent advances in retrieval models based on learned sparse representations generated by transformers have led us to, once again, consider score-at-a-time query evaluation techniques for the top-k retrieval problem.
no code implementations • 4 Oct 2021 • Jimmy Lin
This paper outlines a conceptual framework for understanding recent developments in information retrieval and natural language processing that attempts to integrate dense and sparse retrieval methods.
no code implementations • 4 Oct 2021 • Minghan Li, Jimmy Lin
Previous work on the generalization of DPR mainly focuses on testing both encoders in tandem on out-of-distribution (OOD) question-answering (QA) tasks, a setting also known as domain adaptation.
no code implementations • 3 Sep 2021 • Peng Shi, Rui Zhang, He Bai, Jimmy Lin
Dense retrieval has shown great success in passage ranking in English.
1 code implementation • EMNLP (MRL) 2021 • Xinyu Zhang, Xueguang Ma, Peng Shi, Jimmy Lin
We present Mr. TyDi, a multi-lingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations.
no code implementations • ACL 2021 • Kelvin Jiang, Ronak Pradeep, Jimmy Lin
This work explores a framework for fact verification that leverages pretrained sequence-to-sequence transformer models for sentence selection and label prediction, two key sub-tasks in fact verification.
1 code implementation • ACL 2021 • Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin
To fill this void in the literature, we study in this paper selective prediction for NLP, comparing different models and confidence estimators.
no code implementations • 28 Jun 2021 • Jimmy Lin, Xueguang Ma
Recent developments in representational learning for information retrieval can be organized in a conceptual framework that establishes two pairs of contrasts: sparse vs. dense representations and unsupervised vs. learned representations.
no code implementations • 9 May 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Jimmy Lin
Evaluation efforts such as TREC, CLEF, NTCIR, and FIRE, alongside public leaderboards such as MS MARCO, are intended to encourage research and track our progress, addressing big questions in our field.
no code implementations • EMNLP 2021 • Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin
This paper describes a compact and effective model for low-latency passage retrieval in conversational search based on learned dense representations.
3 code implementations • 14 Apr 2021 • Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, Allan Hanbury
A vital step towards the widespread adoption of neural retrieval models is their resource efficiency throughout the training, indexing and query workflows.
Ranked #15 on Zero-shot Text Search on BEIR
1 code implementation • 12 Apr 2021 • Xueguang Ma, Kai Sun, Ronak Pradeep, Jimmy Lin
Text retrieval using learned dense representations has recently emerged as a promising alternative to "traditional" text retrieval using sparse bag-of-words representations.
1 code implementation • EACL 2021 • Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin
The slow speed of BERT has motivated much research on accelerating its inference, and the early exiting idea has been proposed to make trade-offs between model quality and efficiency.
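The early-exit idea attaches a small classifier (an "off-ramp") to every transformer layer and stops as soon as a prediction looks confident enough. A schematic sketch, assuming a batch size of one and pre-built lists of layers and off-ramp classifiers:

```python
import torch

def early_exit_forward(layers, classifiers, hidden, entropy_threshold=0.3):
    """Run layers one at a time; exit once prediction entropy is low enough."""
    logits = None
    for layer, clf in zip(layers, classifiers):
        hidden = layer(hidden)
        logits = clf(hidden[:, 0])  # classify from the [CLS] position
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        if entropy.item() < entropy_threshold:
            return logits           # confident enough: skip remaining layers
    return logits                   # fell through all layers (full model)
```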
no code implementations • 25 Feb 2021 • Jimmy Lin, Daniel Campos, Nick Craswell, Bhaskar Mitra, Emine Yilmaz
Leaderboards are a ubiquitous part of modern research in applied machine learning.
1 code implementation • 25 Feb 2021 • Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin
In this work, we investigate if the surface form of a number has any influence on how sequence-to-sequence language models learn simple arithmetic tasks such as addition and subtraction across a wide range of values.
1 code implementation • 19 Feb 2021 • Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, Rodrigo Nogueira
Pyserini is an easy-to-use Python toolkit that supports replicable IR research by providing effective first-stage retrieval in a multi-stage ranking architecture.
Cultural Vocal Bursts Intensity Prediction • Information Retrieval +1
1 code implementation • 14 Jan 2021 • Ronak Pradeep, Rodrigo Nogueira, Jimmy Lin
We propose a design pattern for tackling text ranking problems, dubbed "Expando-Mono-Duo", that has been empirically validated for a number of ad hoc retrieval tasks in different domains.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy Lin
We show the effectiveness of our method in terms of attribution and the ability to provide insight into how information flows through layers.
no code implementations • COLING 2020 • Jheng-Hong Yang, Sheng-Chieh Lin, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
While internalized "implicit knowledge" in pretrained transformers has led to fruitful progress in many natural language understanding tasks, how to most effectively elicit such knowledge remains an open question.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Peng Shi, He Bai, Jimmy Lin
We tackle the challenge of cross-lingual training of neural document ranking models for mono-lingual retrieval, specifically leveraging relevance judgments in English to improve search in non-English languages.
1 code implementation • 22 Oct 2020 • Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin
We present an approach to ranking with dense representations that applies knowledge distillation to improve the recently proposed late-interaction ColBERT model.
no code implementations • EACL (Louhi) 2021 • Ronak Pradeep, Xueguang Ma, Rodrigo Nogueira, Jimmy Lin
This work describes the adaptation of a pretrained sequence-to-sequence model to the task of scientific claim verification in the biomedical domain.
1 code implementation • 15 Oct 2020 • Martin Gauch, Frederik Kratzert, Daniel Klotz, Grey Nearing, Jimmy Lin, Sepp Hochreiter
Compared to naive prediction with a distinct LSTM per timescale, the multi-timescale architectures are computationally more efficient with no loss in accuracy.
1 code implementation • NAACL 2021 • Jimmy Lin, Rodrigo Nogueira, Andrew Yates
There are two themes that pervade our survey: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size).
2 code implementations • EMNLP (NLPOSS) 2020 • Raphael Tang, Jaejun Lee, Afsaneh Razi, Julia Cambre, Ian Bicking, Jofish Kaye, Jimmy Lin
We describe Howl, an open-source wake word detection toolkit with native support for open speech datasets, like Mozilla Common Voice and Google Speech Commands.
Ranked #4 on Keyword Spotting on Google Speech Commands
no code implementations • EACL 2021 • Mohan Zhang, Luchen Tan, Zhengkai Tu, Zihang Fu, Kun Xiong, Ming Li, Jimmy Lin
The contribution of this work is a novel data generation technique using distant supervision that allows us to start with a pretrained sequence-to-sequence model and fine-tune a paraphrase generator that exhibits this behavior, allowing user-controllable paraphrase generation.
1 code implementation • EMNLP (sdp) 2020 • Edwin Zhang, Nikhil Gupta, Raphael Tang, Xiao Han, Ronak Pradeep, Kuang Lu, Yue Zhang, Rodrigo Nogueira, Kyunghyun Cho, Hui Fang, Jimmy Lin
We present Covidex, a search engine that exploits the latest neural ranking models to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI.
no code implementations • WS 2020 • Ashutosh Adhikari, Achyudh Ram, Raphael Tang, William L. Hamilton, Jimmy Lin
Fine-tuned variants of BERT are able to achieve state-of-the-art accuracy on many natural language processing tasks, although at significant computational costs.
no code implementations • ACL 2020 • Edwin Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, Jimmy Lin
The Neural Covidex is a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset (CORD-19) curated by the Allen Institute for AI.
2 code implementations • ICML 2020 • Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo Seltzer
Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning.
no code implementations • 5 Jun 2020 • Martin Gauch, Jimmy Lin
In recent years, the paradigms of data-driven science have become essential components of physical sciences, particularly in geophysical disciplines such as climatology.
no code implementations • 5 May 2020 • Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
Conversational search plays a vital role in conversational information seeking.
1 code implementation • 30 Apr 2020 • He Bai, Peng Shi, Jimmy Lin, Yuqing Xie, Luchen Tan, Kun Xiong, Wen Gao, Ming Li
To verify this, we propose a segment-aware Transformer (Segatron), by replacing the original token position encoding with a combined position encoding of paragraph, sentence, and token.
Ranked #13 on Language Modelling on WikiText-103
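The combined position encoding amounts to summing three learned embedding tables, indexed by paragraph, sentence, and token position; the table sizes below are assumptions, and the paper specifies exactly how each index resets:

```python
import torch
import torch.nn as nn

class SegmentAwarePositionEmbedding(nn.Module):
    """Sum of paragraph-, sentence-, and token-level position embeddings."""
    def __init__(self, dim, max_para=64, max_sent=128, max_tok=512):
        super().__init__()
        self.para = nn.Embedding(max_para, dim)
        self.sent = nn.Embedding(max_sent, dim)
        self.tok = nn.Embedding(max_tok, dim)

    def forward(self, para_ids, sent_ids, tok_ids):
        return self.para(para_ids) + self.sent(sent_ids) + self.tok(tok_ids)

emb = SegmentAwarePositionEmbedding(dim=768)
# a token in paragraph 0, sentence 2, token position 5
print(emb(torch.tensor([0]), torch.tensor([2]), torch.tensor([5])).shape)
```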
1 code implementation • ACL 2020 • Raphael Tang, Jaejun Lee, Ji Xin, Xinyu Liu, Yao-Liang Yu, Jimmy Lin
In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks.
3 code implementations • ACL 2020 • Ji Xin, Raphael Tang, Jaejun Lee, Yao-Liang Yu, Jimmy Lin
Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications.
1 code implementation • 23 Apr 2020 • Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, Phuong Cam, Kyunghyun Cho, Jimmy Lin
We present CovidQA, the beginnings of a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge.
1 code implementation • 10 Apr 2020 • Edwin Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, Jimmy Lin
We present the Neural Covidex, a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI.
1 code implementation • ACL 2021 • He Bai, Peng Shi, Jimmy Lin, Luchen Tan, Kun Xiong, Wen Gao, Jie Liu, Ming Li
Experimental results show that the Chinese GPT2 can generate better essay endings with \eop.
no code implementations • 4 Apr 2020 • Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
This paper presents an empirical study of conversational question reformulation (CQR) with sequence-to-sequence architectures and pretrained language models (PLMs).
no code implementations • 18 Mar 2020 • Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
We applied the T5 sequence-to-sequence model to tackle the AI2 WinoGrande Challenge by decomposing each example into two input text strings, each containing a hypothesis, and using the probabilities assigned to the "entailment" token as a score of the hypothesis.
2 code implementations • 18 Mar 2020 • Jimmy Lin, Joel Mackenzie, Chris Kamphuis, Craig Macdonald, Antonio Mallia, Michał Siedlaczek, Andrew Trotman, Arjen de Vries
There exists a natural tension between encouraging a diverse ecosystem of open-source search engines and supporting fair, replicable comparisons across those systems.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin
We investigate this observation further by varying target words to probe the model's use of latent knowledge.
Ranked #1 on Ad-Hoc Information Retrieval on TREC Robust04
1 code implementation • 5 Feb 2020 • Ruixue Zhang, Wei Yang, Luyun Lin, Zhengkai Tu, Yuqing Xie, Zihang Fu, Yuhao Xie, Luchen Tan, Kun Xiong, Jimmy Lin
Techniques for automatically extracting important content elements from business documents such as contracts, statements, and filings have the potential to make business operations more efficient.
no code implementations • 4 Feb 2020 • Jimmy Lin
This paper describes a working prototype that adapts Lucene, the world's most popular and most widely deployed open-source search library, to operate within a serverless environment in the cloud.
no code implementations • 23 Jan 2020 • Rodrigo Nogueira, Zhiying Jiang, Kyunghyun Cho, Jimmy Lin
Citation recommendation systems for the scientific literature, to help authors find papers that should be cited, have the potential to speed up discoveries and uncover new routes for scientific exploration.
1 code implementation • 15 Jan 2020 • Nick Ruest, Jimmy Lin, Ian Milligan, Samantha Fritz
The Archives Unleashed project aims to improve scholarly access to web archives through a multi-pronged strategy involving tool creation, process modeling, and community building - all proceeding concurrently in mutually-reinforcing efforts.
1 code implementation • 17 Nov 2019 • Martin Gauch, Juliane Mai, Jimmy Lin
Accurate streamflow prediction largely relies on historical meteorological records and streamflow measurements.
no code implementations • 15 Nov 2019 • Achyudh Ram, Ji Xin, Meiyappan Nagappan, Yao-Liang Yu, Rocío Cabrera Lozoya, Antonino Sabetta, Jimmy Lin
Public vulnerability databases such as CVE and NVD account for only 60% of security vulnerabilities present in open-source projects, and are known to suffer from inconsistent quality.
no code implementations • 9 Nov 2019 • Linqing Liu, Huan Wang, Jimmy Lin, Richard Socher, Caiming Xiong
Our approach is model-agnostic and can be easily applied to different future teacher model architectures.
no code implementations • 8 Nov 2019 • Jaejun Lee, Raphael Tang, Jimmy Lin
We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality.
no code implementations • 8 Nov 2019 • Peng Shi, Jimmy Lin
Recent work has shown the surprising ability of multi-lingual BERT to serve as a zero-shot cross-lingual transfer model for a number of language processing tasks.
no code implementations • 7 Nov 2019 • Yinan Zhang, Raphael Tang, Jimmy Lin
In this paper, we hypothesize that introducing an explicit, constrained pairwise word interaction mechanism to pretrained language models improves their effectiveness on semantic similarity tasks.
no code implementations • WS 2019 • Ryan Clancy, Ihab F. Ilyas, Jimmy Lin
We present a scalable, open-source platform that "distills" a potentially large text collection into a knowledge graph.
no code implementations • IJCNLP 2019 • Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, Jimmy Lin
A core problem of information retrieval (IR) is relevance matching, which is to rank documents by relevance to a user's query.
no code implementations • IJCNLP 2019 • Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, Jimmy Lin
This paper applies BERT to ad hoc document retrieval on news articles, which requires addressing two challenges: relevance judgments in existing test collections are typically provided only at the document level, and documents often exceed the length that BERT was designed to handle.
no code implementations • IJCNLP 2019 • Linqing Liu, Wei Yang, Jinfeng Rao, Raphael Tang, Jimmy Lin
Semantic similarity modeling is central to many NLP problems such as natural language inference and question answering.
1 code implementation • WS 2019 • Raphael Tang, Yao Lu, Jimmy Lin
Knowledge distillation can effectively transfer knowledge from BERT, a deep language representation model, to traditional, shallow word embedding-based neural networks, helping them approach or exceed the quality of other heavyweight language representation models.
1 code implementation • IJCNLP 2019 • Jaejun Lee, Raphael Tang, Jimmy Lin
Used for simple commands recognition on devices from smart speakers to mobile phones, keyword spotting systems are everywhere.
no code implementations • IJCNLP 2019 • Zeynep Akkalyoncu Yilmaz, Shengjin Wang, Wei Yang, Haotian Zhang, Jimmy Lin
We present Birch, a system that applies BERT to document retrieval via integration with the open-source Anserini information retrieval toolkit to demonstrate end-to-end search over large document collections.
no code implementations • IJCNLP 2019 • Ji Xin, Jimmy Lin, Yao-Liang Yu
Memory neurons of long short-term memory (LSTM) networks encode and process information in powerful yet mysterious ways.
2 code implementations • 31 Oct 2019 • Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, Jimmy Lin
The advent of deep neural networks pre-trained via language modeling tasks has spurred a number of successful applications in natural language processing.
no code implementations • 24 Oct 2019 • Jimmy Lin, Lori Paniak, Gordon Boerke
Experiments show that the largest determinants of performance are the physical characteristics of the source and target media, and that physically isolating the two yields the highest indexing throughput.
no code implementations • 22 Oct 2019 • Tommaso Teofili, Jimmy Lin
We demonstrate three approaches for adapting the open-source Lucene search library to perform approximate nearest-neighbor search on arbitrary dense vectors, using similarity search on word embeddings as a case study.
1 code implementation • IJCNLP 2019 • Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, Xu Sun
Multilingual knowledge graphs (KGs), such as YAGO and DBpedia, represent entities in different languages.
1 code implementation • ACL 2020 • Hamidreza Shahidi, Ming Li, Jimmy Lin
We consider neural table-to-text generation and neural question generation (NQG) tasks for text generation from structured and unstructured data, respectively.
1 code implementation • NAACL 2019 • Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy Lin
Neural network models for many NLP tasks have grown increasingly complex in recent years, making training and deployment more difficult.
Ranked #2 on Document Classification on IMDb-M
no code implementations • NAACL 2019 • Wei Yang, Luchen Tan, Chunwei Lu, Anqi Cui, Han Li, Xi Chen, Kun Xiong, Muzi Wang, Ming Li, Jian Pei, Jimmy Lin
Consumers dissatisfied with the normal dispute resolution process provided by an e-commerce company's customer service agents have the option of escalating their complaints by filing grievances with a government authority.
1 code implementation • 19 Apr 2019 • Wei Yang, Kuang Lu, Peilin Yang, Jimmy Lin
Is neural IR mostly hype?
no code implementations • 18 Apr 2019 • Jimmy Lin
Motivated by recent commentary that has questioned today's pursuit of ever-more complex models and mathematical formalisms in applied machine learning and whether meaningful empirical progress is actually being made, this paper tries to tackle the decades-old problem of pseudo-relevance feedback with "the simplest thing that can possibly work".
4 code implementations • 17 Apr 2019 • Rodrigo Nogueira, Wei Yang, Jimmy Lin, Kyunghyun Cho
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer.
Ranked #1 on Passage Re-Ranking on TREC-PM
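Document expansion in this style can be reproduced with the released doc2query-T5 checkpoint: sample a few questions the passage could answer and append them before indexing. A sketch (the checkpoint name reflects the published model; verify it on the Hugging Face Hub):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/doc2query-t5-base-msmarco"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

doc = ("The Manhattan Project was a research and development undertaking "
       "during World War II that produced the first nuclear weapons.")
inputs = tokenizer(doc, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, do_sample=True,
                         top_k=10, num_return_sequences=3)

# Append the sampled questions to the document text before indexing.
questions = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
expanded_doc = doc + " " + " ".join(questions)
print(expanded_doc)
```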
3 code implementations • 17 Apr 2019 • Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy Lin
We present, to our knowledge, the first application of BERT to document classification.
Ranked #1 on Document Classification on Yelp-14
no code implementations • 14 Apr 2019 • Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin
Recently, a simple combination of passage retrieval using off-the-shelf IR techniques and a BERT reader was found to be very effective for question answering directly on Wikipedia, yielding a large improvement over the previous state of the art on a standard benchmark dataset.
Ranked #2 on Open-Domain Question Answering on SQuAD1.1 dev
3 code implementations • 10 Apr 2019 • Peng Shi, Jimmy Lin
We present simple BERT-based models for relation extraction and semantic role labeling.
Ranked #29 on Relation Extraction on TACRED
4 code implementations • 28 Mar 2019 • Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin
In the natural language processing literature, neural networks are becoming increasingly deep and complex.
Ranked #56 on Sentiment Analysis on SST-2 Binary classification
2 code implementations • 26 Mar 2019 • Wei Yang, Haotian Zhang, Jimmy Lin
Following recent successes in applying BERT to question answering, we explore simple applications to ad hoc document retrieval.
Ranked #2 on Ad-Hoc Information Retrieval on TREC Robust04 (MAP metric)
1 code implementation • 15 Mar 2019 • Michael Azmy, Peng Shi, Jimmy Lin, Ihab F. Ilyas
This paper explores the problem of matching entities across different knowledge graphs.
1 code implementation • NAACL 2019 • Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin
We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit.
Ranked #3 on Open-Domain Question Answering on SQuAD1.1 dev
no code implementations • 19 Dec 2018 • Raphael Tang, Gefei Yang, Hong Wei, Yajie Mao, Ferhan Ture, Jimmy Lin
Voice-enabled commercial products are ubiquitous, typically enabled by lightweight on-device keyword spotting (KWS) and full automatic speech recognition (ASR) in the cloud.
Automatic Speech Recognition (ASR) +4
1 code implementation • ACM SIGIR Forum, Volume 52 Issue 2 2018 • Jimmy Lin
Sculley et al. remind us that "the goal of science is not wins, but knowledge".
Ranked #3 on Ad-Hoc Information Retrieval on TREC Robust04 (MAP metric)
Ad-Hoc Information Retrieval • Cultural Vocal Bursts Intensity Prediction +1
no code implementations • NIPS Workshop CDNNRIA 2018 • Raphael Tang, Ashutosh Adhikari, Jimmy Lin
There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference.
no code implementations • NAACL 2019 • Peng Shi, Jinfeng Rao, Jimmy Lin
This paper explores the problem of ranking short social media posts with respect to user queries using neural networks.
no code implementations • 2 Nov 2018 • Raphael Tang, Jimmy Lin
In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks.
1 code implementation • 30 Oct 2018 • Jaejun Lee, Raphael Tang, Jimmy Lin
Overall, our robust, cross-device implementation for keyword spotting realizes a new paradigm for serving neural network applications, and one of our slim models reduces latency by 66% with a minimal decrease in accuracy of 4% from 94% to 90%.
no code implementations • ICLR 2019 • Raphael Tang, Jimmy Lin
Neural language models (NLMs) exist in an accuracy-efficiency tradeoff space where better perplexity typically comes at the cost of greater computational complexity.
1 code implementation • COLING 2018 • Michael Azmy, Peng Shi, Jimmy Lin, Ihab Ilyas
To address this problem, we present SimpleDBpediaQA, a new benchmark dataset for simple question answering over knowledge graphs that was created by mapping SimpleQuestions entities and predicates from Freebase to DBpedia.
no code implementations • 16 Jul 2018 • Jimmy Lin, Peilin Yang
Due to multi-threaded indexing, which makes experimentation with large modern document collections practical, internal document ids are not assigned consistently between different index instances of the same collection, and thus score ties are broken unpredictably.
no code implementations • NAACL 2018 • Zhucheng Tu, Mengping Li, Jimmy Lin
We demonstrate the serverless deployment of neural networks for model inferencing in NLP applications using Amazon's Lambda service for feedforward evaluation and DynamoDB for storing word embeddings.
no code implementations • NAACL 2018 • Yiyun Liang, Zhucheng Tu, Laetitia Huang, Jimmy Lin
We demonstrate a JavaScript implementation of a convolutional neural network that performs feedforward inference completely in the browser.
3 code implementations • 21 May 2018 • Jinfeng Rao, Wei Yang, Yuhao Zhang, Ferhan Ture, Jimmy Lin
To the best of our knowledge, this paper presents the first substantial work tackling search over social media posts using neural ranking models.
no code implementations • NAACL 2018 • Salman Mohammed, Peng Shi, Jimmy Lin
We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact.
4 code implementations • 28 Oct 2017 • Raphael Tang, Jimmy Lin
We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark.
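A sketch of the kind of building block this describes: a residual 2D-convolutional block with a dilated convolution over time-frequency audio features (channel count and feature shape are assumptions in the spirit of the paper's res-style models):

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Residual block with a dilated 3x3 convolution for keyword spotting."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation,
                               dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # identity skip connection

block = DilatedResidualBlock(channels=45, dilation=2)
x = torch.randn(1, 45, 101, 40)  # (batch, channels, time frames, MFCC bands)
print(block(x).shape)            # spatial dims preserved by the padding
```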
4 code implementations • 18 Oct 2017 • Raphael Tang, Jimmy Lin
We describe Honk, an open-source PyTorch reimplementation of convolutional neural networks for keyword spotting that are included as examples in TensorFlow.
no code implementations • EMNLP 2017 • Hua He, Kris Ganjam, Navendu Jain, Jessica Lundin, Ryen White, Jimmy Lin
Mining biomedical text offers an opportunity to automatically discover important facts and infer associations among them.
no code implementations • 25 Jul 2017 • Jinfeng Rao, Hua He, Haotian Zhang, Ferhan Ture, Royal Sequiera, Salman Mohammed, Jimmy Lin
To our knowledge, we are the first to integrate lexical and temporal signals in an end-to-end neural network architecture, in which existing neural ranking models are used to generate query-document similarity vectors that feed into a bidirectional LSTM layer for temporal modeling.
no code implementations • 25 Jul 2017 • Royal Sequiera, Gaurav Baruah, Zhucheng Tu, Salman Mohammed, Jinfeng Rao, Haotian Zhang, Jimmy Lin
Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer.
no code implementations • TACL 2015 • Hua He, Jimmy Lin, Adam Lopez
We believe that GPU-based extraction of hierarchical grammars is an attractive proposition, particularly for MT applications that demand high throughput.
no code implementations • 4 Jun 2014 • Sarah Weissman, Samet Ayhan, Joshua Bradley, Jimmy Lin
Our study identifies sentences in Wikipedia articles that are either identical or highly similar by applying techniques for near-duplicate detection of web pages.
no code implementations • 11 Dec 2012 • Nima Asadi, Jimmy Lin, Arjen P. de Vries
Tree-based models have proven to be an effective solution for web ranking as well as other problems in diverse domains.