Search Results for author: Kazoo Sone

Found 7 papers, 3 papers with code

Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection

no code implementations NAACL 2021 Sihao Chen, Fan Zhang, Kazoo Sone, Dan Roth

Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context.

Abstractive Text Summarization, Hallucination

Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations

no code implementations EMNLP 2020 Wanrong Zhu, Xin Eric Wang, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang

A major challenge in visually grounded language generation is to build robust benchmark datasets and models that can generalize well in real-world settings.

Text Generation

Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation

1 code implementation EACL 2021 Wanrong Zhu, Xin Eric Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang

Outdoor vision-and-language navigation (VLN) is a task in which an agent follows natural language instructions and navigates a real-life urban environment.

Ranked #4 on Vision and Language Navigation on Touchdown Dataset (using extra training data)

Style Transfer, Text Style Transfer, +1

Multi-Image Summarization: Textual Summary from a Set of Cohesive Images

no code implementations 15 Jun 2020 Nicholas Trieu, Sebastian Goodman, Pradyumna Narayana, Kazoo Sone, Radu Soricut

Multi-sentence summarization is a well-studied problem in NLP, while generating image descriptions for a single image is a well-studied problem in Computer Vision.

Descriptive, Image Captioning, +2

HUSE: Hierarchical Universal Semantic Embeddings

8 code implementations 14 Nov 2019 Pradyumna Narayana, Aniket Pednekar, Abishek Krishnamoorthy, Kazoo Sone, Sugato Basu

Works in the domain of visual semantic embeddings address this problem by first constructing a semantic embedding space based on external knowledge and then projecting image embeddings onto this fixed semantic embedding space.

General Classification, Representation Learning, +1
