Search Results for author: Ximing Lu

Found 13 papers, 7 papers with code

Twist Decoding: Diverse Generators Guide Each Other

1 code implementation · 19 May 2022 · Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith

Natural language generation technology has recently seen remarkable progress with large-scale training, and many natural language applications are now built upon a wide range of generation models.

Machine Translation · Text Generation

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

no code implementations · 16 Dec 2021 · Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi

To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction.
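The snippet above names the core mechanism: rank partial generations not only by language-model score but also by an estimate of how many constraints a continuation is still likely to satisfy. Below is a minimal, self-contained sketch of that lookahead idea over a toy language model; the vocabulary, the greedy rollout used as the lookahead, and the weighting are illustrative assumptions, not the authors' implementation.

```python
import math

VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "<eos>"]

def next_token_logprobs(prefix):
    """Toy next-token distribution standing in for a real LM: uniform,
    with a preference to end once the sequence is long enough."""
    weights = {tok: 1.0 for tok in VOCAB}
    if len(prefix) >= 5:
        weights["<eos>"] = 3.0
    z = sum(weights.values())
    return {tok: math.log(w / z) for tok, w in weights.items()}

def greedy_rollout(prefix, steps=3):
    """Cheap lookahead: greedily extend the prefix a few tokens."""
    out = list(prefix)
    for _ in range(steps):
        dist = next_token_logprobs(out)
        tok = max(dist, key=dist.get)
        out.append(tok)
        if tok == "<eos>":
            break
    return out

def constraint_bonus(tokens, keywords):
    """Fraction of required keywords present (constraints as single literals)."""
    return sum(k in tokens for k in keywords) / len(keywords)

def lookahead_beam_search(keywords, beam_size=3, max_len=8, alpha=2.0):
    def priority(candidate):
        tokens, lm_score = candidate
        # A*-style estimate: LM score plus a bonus for constraints already
        # satisfied, or expected to be satisfied under a cheap greedy rollout.
        return lm_score + alpha * constraint_bonus(greedy_rollout(tokens), keywords)

    beams = [([], 0.0)]  # (tokens, accumulated LM log-probability)
    for _ in range(max_len):
        candidates = []
        for tokens, lm_score in beams:
            if tokens and tokens[-1] == "<eos>":
                candidates.append((tokens, lm_score))
                continue
            for tok, lp in next_token_logprobs(tokens).items():
                candidates.append((tokens + [tok], lm_score + lp))
        beams = sorted(candidates, key=priority, reverse=True)[:beam_size]
    return max(beams, key=priority)[0]

print(lookahead_beam_search(keywords=["cat", "mat"]))
```

In the actual setting, the toy LM would be a real model's next-token log-probabilities and the single-keyword check would be replaced by full logical clauses rather than individual words.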

Machine Translation · Table-to-Text Generation

Connecting the Dots between Audio and Text without Parallel Data through Visual Knowledge Transfer

1 code implementation · 16 Dec 2021 · Yanpeng Zhao, Jack Hessel, Youngjae Yu, Ximing Lu, Rowan Zellers, Yejin Choi

In a difficult zero-shot setting with no paired audio-text data, our model demonstrates state-of-the-art zero-shot performance on the ESC50 and US8K audio classification tasks, and even surpasses the supervised state of the art for Clotho caption retrieval (with audio queries) by 2.2% R@1.

Audio Classification · Audio Tagging +1

Generated Knowledge Prompting for Commonsense Reasoning

1 code implementation · ACL 2022 · Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.
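The title refers to generating knowledge statements with a language model and then conditioning the answer on them. The sketch below shows one way such a generate-then-prompt pipeline could be wired up; the prompt template and the `lm_generate` / `lm_score_answer` callables are hypothetical stand-ins, not the paper's prompts or models.

```python
from typing import Callable, List

def generated_knowledge_answer(
    question: str,
    choices: List[str],
    lm_generate: Callable[[str], List[str]],       # prompt -> knowledge statements
    lm_score_answer: Callable[[str, str], float],  # (context, choice) -> score
    num_statements: int = 5,
) -> str:
    """Generate knowledge statements, then answer once per statement and
    keep the highest-confidence (statement, choice) pair overall."""
    knowledge_prompt = f"Generate a fact that helps answer: {question}\nFact:"
    statements = lm_generate(knowledge_prompt)[:num_statements]

    best_choice, best_score = None, float("-inf")
    for statement in [""] + statements:            # "" = no added knowledge
        context = f"{statement}\n{question}".strip()
        for choice in choices:
            score = lm_score_answer(context, choice)
            if score > best_score:
                best_choice, best_score = choice, score
    return best_choice

# Toy usage with dummy stand-ins (a real setup would wrap an actual LM).
answer = generated_knowledge_answer(
    "Where would you find a penguin?",
    ["desert", "Antarctica"],
    lm_generate=lambda p: ["Penguins live in cold climates."],
    lm_score_answer=lambda ctx, c: float("cold" in ctx and c == "Antarctica"),
)
print(answer)  # Antarctica
```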

Language Modelling

MERLOT: Multimodal Neural Script Knowledge Models

1 code implementation · NeurIPS 2021 · Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi

As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future.

Visual Commonsense Reasoning

On-the-Fly Attention Modulation for Neural Generation

no code implementations · Findings (ACL) 2021 · Yue Dong, Chandra Bhagavatula, Ximing Lu, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi

Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense.

Language Modelling · Text Generation

Analyzing Commonsense Emergence in Few-shot Knowledge Models

1 code implementation · AKBC 2021 · Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut

Our results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to encoded knowledge learned during pretraining.
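As context for what "KG fine-tuning" looks like in practice, the sketch below serializes a handful of commonsense triples into the kind of text-to-text pairs a knowledge model could be tuned on in a few-shot setting; the relation templates and example triples are illustrative assumptions, not the paper's data.

```python
# Handful of (head, relation, tail) commonsense triples; any standard
# seq2seq or causal-LM trainer could consume the resulting (source, target) pairs.
FEW_SHOT_TRIPLES = [
    ("PersonX buys an umbrella", "xIntent", "to stay dry in the rain"),
    ("PersonX studies all night", "xEffect", "feels tired the next day"),
    ("bicycle", "UsedFor", "riding to work"),
]

RELATION_TEMPLATES = {
    "xIntent": "{head}. Because PersonX wanted",
    "xEffect": "{head}. As a result, PersonX",
    "UsedFor": "A {head} is used for",
}

def triples_to_pairs(triples):
    """Serialize KG triples into (source, target) text pairs for fine-tuning."""
    pairs = []
    for head, relation, tail in triples:
        source = RELATION_TEMPLATES[relation].format(head=head)
        pairs.append((source, tail))
    return pairs

for src, tgt in triples_to_pairs(FEW_SHOT_TRIPLES):
    print(f"{src!r} -> {tgt!r}")
```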

Pretrained Language Models

NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints

no code implementations · NAACL 2021 · Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

While the dominant recipe for conditional text generation has been large-scale pretrained language models that are finetuned on the task-specific training data, such models do not learn to follow the underlying constraints reliably, even when supervised with large amounts of task-specific examples.

Conditional Text Generation · Pretrained Language Models

Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models

no code implementations · ACL 2021 · Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena Hwang, Yejin Choi

In this paper, we present Reflective Decoding, a novel unsupervised algorithm that allows for direct application of unidirectional LMs to non-sequential tasks.

Conditional Text Generation · Text Infilling

HATNet: An End-to-End Holistic Attention Network for Diagnosis of Breast Biopsy Images

1 code implementation · 25 Jul 2020 · Sachin Mehta, Ximing Lu, Donald Weaver, Joann G. Elmore, Hannaneh Hajishirzi, Linda Shapiro

HATNet extends the bag-of-words approach and uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision.
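A minimal sketch of the idea in that sentence: treat an image as a bag of patch "words", embed each patch, let multi-head self-attention mix global information across the bag, and classify a pooled representation. Layer sizes, the pooling choice, and the single attention layer are illustrative assumptions, not the HATNet architecture itself.

```python
import torch
import torch.nn as nn

class BagOfPatchesAttention(nn.Module):
    def __init__(self, patch_dim=768, embed_dim=256, num_heads=4, num_classes=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)         # per-patch "word" embedding
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)  # image-level prediction

    def forward(self, patches):                  # patches: (batch, num_patches, patch_dim)
        tokens = self.embed(patches)
        mixed, _ = self.attn(tokens, tokens, tokens)  # self-attention over the bag
        pooled = mixed.mean(dim=1)                    # global average over patches
        return self.classifier(pooled)

# Toy usage: 2 "images", each represented as a bag of 16 pre-extracted patch features.
model = BagOfPatchesAttention()
logits = model(torch.randn(2, 16, 768))
print(logits.shape)  # torch.Size([2, 4])
```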

Histopathological Image Classification · Image Classification
