1 code implementation • EMNLP 2020 • Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, Fei Liu
The ability to fuse sentences is highly attractive for summarization systems because it is an essential step toward producing succinct abstracts.
no code implementations • 2 Aug 2020 • Lidan Wang, Franck Dernoncourt, Trung Bui
The performance of many machine learning models depends on their hyper-parameter settings.
1 code implementation • ACL 2020 • Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, Fei Liu
We create a dataset containing the documents, source and fusion sentences, and human annotations of points of correspondence between sentences.
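An illustrative record layout for such a dataset is sketched below; the field names and type labels are assumptions for exposition, not the released schema. Each example pairs two source sentences with their human-written fusion and the annotated points of correspondence linking them.

```python
# Hypothetical record layout for a sentence-fusion dataset with
# points-of-correspondence annotations (field names are illustrative).
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class FusionExample:
    document_id: str
    source_sentences: Tuple[str, str]      # the two sentences to be fused
    fusion_sentence: str                   # human-written fused sentence
    # Each point of correspondence links a span in sentence 1 to a span in
    # sentence 2 (e.g. a coreferent entity mention) plus a coarse type label.
    points_of_correspondence: List[Tuple[str, str, str]] = field(default_factory=list)


example = FusionExample(
    document_id="cnn-000001",
    source_sentences=("The storm hit the coast on Monday.",
                      "It knocked out power for thousands of residents."),
    fusion_sentence="The storm hit the coast on Monday, knocking out power for thousands.",
    points_of_correspondence=[("The storm", "It", "pronominal reference")],
)
```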
no code implementations • NAACL 2021 • Jinfeng Xiao, Lidan Wang, Franck Dernoncourt, Trung Bui, Tong Sun, Jiawei Han
Our reader-retriever first uses an offline reader to read the corpus and generate collections of all answerable questions paired with their answers; an online retriever then responds to user queries by searching these pre-constructed question spaces for the answers whose questions are most likely to be asked in the given way.
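A minimal sketch of that two-stage pipeline, with a placeholder question generator and a simple TF-IDF matcher standing in for the paper's components: the offline pass turns every passage into question-answer pairs, and the online pass matches the user query against the stored questions.

```python
# Sketch of a reader-retriever pipeline; `generate_questions` is a trivial
# placeholder for the offline reader, and TF-IDF stands in for the retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def generate_questions(passage):
    """Offline reader (placeholder): emit (question, answer) pairs for a passage."""
    return [(f"What does the passage say about {passage.split()[0]}?", passage)]


# ---- Offline stage: read the corpus once and build the question space ----
corpus = [
    "Transformers use self-attention to model token interactions.",
    "BM25 ranks documents by term frequency and inverse document frequency.",
]
question_space, answers = [], []
for passage in corpus:
    for q, a in generate_questions(passage):
        question_space.append(q)
        answers.append(a)

vectorizer = TfidfVectorizer().fit(question_space)
question_matrix = vectorizer.transform(question_space)


# ---- Online stage: match the user query against pre-constructed questions ----
def answer(query, top_k=1):
    sims = cosine_similarity(vectorizer.transform([query]), question_matrix)[0]
    best = sims.argsort()[::-1][:top_k]
    return [(question_space[i], answers[i], float(sims[i])) for i in best]


print(answer("How do transformers model interactions between tokens?"))
```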
1 code implementation • 27 Oct 2017 • Lidan Wang, Vishwanath A. Sindagi, Vishal M. Patel
To this end, we propose a novel synthesis framework, Photo-Sketch Synthesis using Multi-Adversarial Networks (PS2-MAN), that iteratively generates images from low to high resolution in an adversarial way.
Ranked #2 on Face Sketch Synthesis on CUHK
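A toy PyTorch sketch of the multi-resolution adversarial idea: the generator emits outputs at successively higher resolutions, and each resolution is judged by its own discriminator. The layer sizes, module names, and loss are illustrative assumptions, not the PS2-MAN architecture.

```python
# Toy multi-resolution adversarial generator/discriminator pair;
# sizes and modules are illustrative, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StagedGenerator(nn.Module):
    """Emits images at 64x64, 128x128, and 256x256 from a 64x64 input."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.up1 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.up2 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.to_rgb = nn.Conv2d(64, 3, 3, padding=1)   # shared output head

    def forward(self, x):
        h = self.stem(x)
        out_lo = torch.tanh(self.to_rgb(h))            # 64x64 output
        h = self.up1(h)
        out_mid = torch.tanh(self.to_rgb(h))           # 128x128 output
        h = self.up2(h)
        out_hi = torch.tanh(self.to_rgb(h))            # 256x256 output
        return out_lo, out_mid, out_hi


def make_discriminator():
    """Patch-style critic; one instance per output resolution."""
    return nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(64, 1, 4, stride=2, padding=1))


generator = StagedGenerator()
discriminators = [make_discriminator() for _ in range(3)]

photo = torch.randn(1, 3, 64, 64)                      # input photo (or sketch)
adv_loss = torch.tensor(0.0)
for disc, fake in zip(discriminators, generator(photo)):
    logits = disc(fake)                                # real/fake score per patch
    adv_loss = adv_loss + F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))               # generator tries to look real
adv_loss.backward()
```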
no code implementations • COLING 2016 • Lidan Wang, Ming Tan, Jiawei Han
In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both accuracy and scalability standpoints.
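One common way to reconcile accuracy with scalability is a two-stage cascade: a cheap lexical scorer prunes the candidate answers, and a slower scorer reranks only the survivors. The sketch below illustrates that general pattern under this assumption; it is not the FastHybrid model itself, and the deep matcher is a placeholder.

```python
# Illustrative accuracy/scalability cascade (not the FastHybrid model):
# TF-IDF prunes candidates, then a costlier scorer reranks the shortlist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def expensive_score(question, answer):
    """Placeholder for a deep semantic matcher; here just normalized word overlap."""
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / (len(q) ** 0.5 * len(a) ** 0.5)


def rank_answers(question, candidates, keep=10):
    vec = TfidfVectorizer().fit(candidates + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(candidates))[0]
    shortlist = sims.argsort()[::-1][:keep]                 # cheap stage: prune
    rescored = [(expensive_score(question, candidates[i]), candidates[i])
                for i in shortlist]                         # costly stage: rerank
    return sorted(rescored, reverse=True)


print(rank_answers("what is the capital of France",
                   ["Paris is the capital of France.",
                    "Berlin is the capital of Germany.",
                    "France borders Spain."])[:2])
```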
4 code implementations • 7 Aug 2015 • Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bo-Wen Zhou
We apply a general deep learning framework to address the non-factoid question answering task.
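A minimal PyTorch sketch of the general pattern such frameworks follow: encode the question and a candidate answer into one vector space and train with a margin loss so correct answers score above sampled negatives. The bag-of-embeddings encoder here is a toy stand-in, not the paper's architecture.

```python
# Toy question/answer matching setup: shared encoder, cosine similarity,
# hinge ranking loss against a sampled negative answer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)    # averages token embeddings

    def forward(self, token_ids):
        return F.normalize(self.emb(token_ids), dim=-1)


encoder = TextEncoder()
margin = 0.2

# Toy batch: token ids for a question, its correct answer, and a random negative.
q = torch.randint(0, 10_000, (1, 12))
pos = torch.randint(0, 10_000, (1, 30))
neg = torch.randint(0, 10_000, (1, 30))

sim_pos = (encoder(q) * encoder(pos)).sum(-1)   # cosine similarity (unit-norm vectors)
sim_neg = (encoder(q) * encoder(neg)).sum(-1)
loss = torch.clamp(margin - sim_pos + sim_neg, min=0).mean()   # hinge ranking loss
loss.backward()
```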