no code implementations • LREC 2022 • Fan Luo, Mihai Surdeanu
Through an evaluation on HotpotQA, a popular dataset for multi-hop QA, we show that our method yields: (a) improved evidence retrieval; (b) improved QA performance when using the retrieved sentences; and (c) effective and faithful explanations when answers are provided.
no code implementations • 5 Nov 2023 • Fan Luo, Mihai Surdeanu
However, semantic equivalence is not the only relevance signal that needs to be considered when retrieving evidence for multi-hop questions.
no code implementations • 4 Nov 2023 • Fan Luo, Mihai Surdeanu
Building a question answering (QA) model with lower annotation costs can be achieved by utilizing an active learning (AL) training strategy.
no code implementations • 23 Sep 2021 • Fan Luo, Shaoxiang Chen, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
Given a text description, Temporal Language Grounding (TLG) aims to localize temporal boundaries of the segments that contain the specified semantics in an untrimmed video.
no code implementations • NAACL (CLPsych) 2021 • Ning Wang, Fan Luo, Yuvraj Shivtare, Varsha D. Badal, K. P. Subbalakshmi, R. Chandramouli, Ellen Lee
We propose a deep learning architecture and test three other machine learning models to automatically detect individuals that will attempt suicide within (1) 30 days and (2) six months, using their social media post data provided in the CLPsych 2021 shared task.
no code implementations • WS 2020 • Ning Wang, Fan Luo, Vishal Peddagangireddy, K. P. Subbalakshmi, R. Chandramouli
In this paper, we show that machine learning-based unsupervised clustering of, and anomaly detection with, linguistic biomarkers are promising approaches for intuitive visualization and personalized early-stage detection of Alzheimer's disease.
1 code implementation • NAACL 2019 • Rebecca Sharp, Adarsh Pyarelal, Benjamin Gyori, Keith Alcock, Egoitz Laparra, Marco A. Valenzuela-Escárcega, Ajay Nagesh, Vikas Yadav, John Bachman, Zheng Tang, Heather Lent, Fan Luo, Mithun Paul, Steven Bethard, Kobus Barnard, Clayton Morrison, Mihai Surdeanu
Building causal models of complicated phenomena such as food insecurity is currently a slow and labor-intensive manual process.
no code implementations • WS 2019 • Fan Luo, Ajay Nagesh, Rebecca Sharp, Mihai Surdeanu
Generating a large amount of training data for information extraction (IE) is either costly (if annotations are created manually), or runs the risk of introducing noisy instances (if distant supervision is used).
no code implementations • WS 2018 • Fan Luo, Marco A. Valenzuela-Escárcega, Gus Hahn-Powell, Mihai Surdeanu
We introduce a machine learning approach for the identification of "white spaces" in scientific knowledge.