1 code implementation • NAACL (NLPMC) 2021 • Khalil Mrini, Franck Dernoncourt, Walter Chang, Emilia Farcas, Ndapa Nakashole
Understanding the intent of medical questions asked by patients, or Consumer Health Questions, is an essential skill for medical Conversational AI systems.
no code implementations • Findings (NAACL) 2022 • Adyasha Maharana, Quan Tran, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Mohit Bansal
We construct and present a new multimodal dataset consisting of software instructional livestreams and containing manual annotations for both detailed and abstract procedural intent that enable training and evaluation of joint video and text understanding models.
no code implementations • NAACL (BioNLP) 2021 • Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
We show that both transfer learning methods combined achieve the highest ROUGE scores.
1 code implementation • COLING 2022 • Khalil Mrini, Harpreet Singh, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
The system first matches the summarized user question with an FAQ from a trusted medical knowledge base, and then retrieves a fixed number of relevant sentences from the corresponding answer document.
2 code implementations • EMNLP 2021 • JianGuo Zhang, Trung Bui, Seunghyun Yoon, Xiang Chen, Zhiwei Liu, Congying Xia, Quan Hung Tran, Walter Chang, Philip Yu
In this work, we focus on a more challenging few-shot intent detection scenario where many intents are fine-grained and semantically similar.
1 code implementation • EMNLP 2021 • Sangwoo Cho, Franck Dernoncourt, Tim Ganter, Trung Bui, Nedim Lipka, Walter Chang, Hailin Jin, Jonathan Brandt, Hassan Foroosh, Fei Liu
With the explosive growth of livestream broadcasting, there is an urgent need for new summarization technology that enables us to create a preview of streamed content and tap into this wealth of knowledge.
1 code implementation • ACL 2021 • Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
Users of medical question answering systems often submit long and detailed questions, making it hard to achieve high recall in answer retrieval.
1 code implementation • NAACL 2021 • Tuan Lai, Heng Ji, Trung Bui, Quan Hung Tran, Franck Dernoncourt, Walter Chang
Event coreference resolution is an important research problem with many applications.
1 code implementation • EACL 2021 • Amir Pouran Ben Veyseh, Franck Dernoncourt, Walter Chang, Thien Huu Nguyen
However, none of the existing works provide a unified solution that is capable of processing acronyms across various domains and is publicly available.
no code implementations • 22 Dec 2020 • Amir Pouran Ben Veyseh, Franck Dernoncourt, Thien Huu Nguyen, Walter Chang, Leo Anthony Celi
To push forward research in this direction, we have organized two shared tasks for acronym identification and acronym disambiguation in scientific documents, named AI@SDU and AD@SDU, respectively.
1 code implementation • EMNLP 2020 • Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, Fei Liu
The ability to fuse sentences is highly attractive for summarization systems because it is an essential step to produce succinct abstracts.
1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Walter Chang, Fei Liu
We present an empirical study in favor of a cascade architecture to neural text summarization.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Xuanli He, Quan Hung Tran, Gholamreza Haffari, Walter Chang, Trung Bui, Zhe Lin, Franck Dernoncourt, Nhan Dam
In this paper, we explore the novel problem of graph modification, where the systems need to learn how to update an existing scene graph given a user's new command.
1 code implementation • ACL 2020 • Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, Fei Liu
We create a dataset containing the documents, source and fusion sentences, and human annotations of points of correspondence between sentences.
no code implementations • 18 May 2020 • Sean MacAvaney, Franck Dernoncourt, Walter Chang, Nazli Goharian, Ophir Frieder
We present an elegant and effective approach for addressing limitations in existing multi-label classification models by incorporating interaction matching, a concept shown to be useful for ad-hoc search result ranking.
1 code implementation • EMNLP 2020 • Kang Min Yoo, Hanbit Lee, Franck Dernoncourt, Trung Bui, Walter Chang, Sang-goo Lee
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefits NLP tasks.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Khalil Mrini, Franck Dernoncourt, Quan Tran, Trung Bui, Walter Chang, Ndapa Nakashole
Finally, we find that the Label Attention heads learn relations between syntactic categories and show pathways to analyze errors.
Ranked #1 on Dependency Parsing on Penn Treebank
no code implementations • WS 2019 • Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu
While recent work in abstractive summarization has resulted in higher scores in automatic metrics, there is little understanding of how these systems combine information taken from multiple document sentences.
3 code implementations • ACL 2019 • Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu
There is thus a crucial gap between sentence selection and fusion to support summarizing by both compressing single sentences and fusing pairs.
no code implementations • 18 Apr 2019 • Longqi Yang, Chen Fang, Hailin Jin, Walter Chang, Deborah Estrin
Complex design tasks often require performing diverse actions in a specific order.
no code implementations • 3 Dec 2018 • Jacqueline Brixey, Ramesh Manuvinakurike, Nham Le, Tuan Lai, Walter Chang, Trung Bui
This work presents the task of modifying images in an image editing program using natural language written commands.
no code implementations • COLING 2018 • Sasha Spala, Franck Dernoncourt, Walter Chang, Carl Dockhorn
Automatically highlighting a text aims to identify the key portions that are most important to a reader.
no code implementations • WS 2018 • Ramesh Manuvinakurike, Trung Bui, Walter Chang, Kallirroi Georgila
We present "conversational image editing", a novel real-world application domain combining dialogue, visual information, and the use of computer vision.
2 code implementations • NAACL 2018 • Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian
Neural abstractive summarization models have led to promising results in summarizing relatively short documents.
Ranked #4 on Unsupervised Extractive Summarization on Pubmed
no code implementations • CVPR 2017 • Ji Zhang, Mohamed Elhoseiny, Scott Cohen, Walter Chang, Ahmed Elgammal
We demonstrate the ability of our Rel-PN to localize relationships with only a few thousand proposals.
no code implementations • 20 Oct 2016 • Omid Bakhshandeh, Trung Bui, Zhe Lin, Walter Chang
One of the most interesting recent open-ended question answering challenges is Visual Question Answering (VQA) which attempts to evaluate a system's visual understanding through its answers to natural language questions about images.
no code implementations • WS 2016 • Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed Elgammal
Motivated by the application of fact-level image understanding, we present an automatic method for data collection of structured visual facts from images with captions.
no code implementations • 16 Nov 2015 • Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed Elgammal
We show that learning visual facts in a structured way enables not only uniform but also generalizable visual understanding.