Search Results for author: Fan Luo

Found 7 papers, 1 paper with code

A STEP towards Interpretable Multi-Hop Reasoning: Bridge Phrase Identification and Query Expansion

no code implementations LREC 2022 Fan Luo, Mihai Surdeanu

Through an evaluation on HotpotQA, a popular dataset for multi-hop QA, we show that our method yields: (a) improved evidence retrieval; (b) improved QA performance when using the retrieved sentences; and (c) effective and faithful explanations when answers are provided.

Multi-hop Question Answering · Question Answering · +1

Self-supervised Learning for Semi-supervised Temporal Language Grounding

no code implementations 23 Sep 2021 Fan Luo, Shaoxiang Chen, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang

Given a text description, Temporal Language Grounding (TLG) aims to localize temporal boundaries of the segments that contain the specified semantics in an untrimmed video.

Contrastive Learning · Pseudo Label · +1

Learning Models for Suicide Prediction from Social Media Posts

no code implementations NAACL (CLPsych) 2021 Ning Wang, Fan Luo, Yuvraj Shivtare, Varsha D. Badal, K. P. Subbalakshmi, R. Chandramouli, Ellen Lee

We propose a deep learning architecture, and test three other machine learning models, to automatically detect individuals who will attempt suicide within (1) 30 days and (2) six months, using their social media post data provided in the CLPsych 2021 shared task.

BIG-bench Machine Learning

Personalized Early Stage Alzheimer's Disease Detection: A Case Study of President Reagan's Speeches

no code implementations WS 2020 Ning Wang, Fan Luo, Vishal Peddagangireddy, K. P. Subbalakshmi, R. Chandramouli

In this paper, we show that machine learning-based unsupervised clustering of, and anomaly detection with, linguistic biomarkers are promising approaches for intuitive visualization and personalized early-stage detection of Alzheimer's disease.

Alzheimer's Disease Detection · Anomaly Detection

Semi-Supervised Teacher-Student Architecture for Relation Extraction

no code implementations WS 2019 Fan Luo, Ajay Nagesh, Rebecca Sharp, Mihai Surdeanu

Generating a large amount of training data for information extraction (IE) is either costly (if annotations are created manually) or runs the risk of introducing noisy instances (if distant supervision is used).

Binary Relation Extraction · Denoising
