Search Results for author: Yichen Jiang

Found 13 papers, 10 papers with code

Inducing Transformer’s Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks

1 code implementation EMNLP 2021 Yichen Jiang, Mohit Bansal

Motivated by the failure of a Transformer model on the SCAN compositionality challenge (Lake and Baroni, 2018), which requires parsing a command into actions, we propose two auxiliary sequence prediction tasks as additional training supervision.

Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings

1 code implementation 9 Feb 2024 Yichen Jiang, Xiang Zhou, Mohit Bansal

Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity.

Machine Translation, Quantization, +2

Data Factors for Better Compositional Generalization

1 code implementation 8 Nov 2023 Xiang Zhou, Yichen Jiang, Mohit Bansal

However, in contrast to this poor performance, state-of-the-art models trained on larger and more general datasets show better generalization ability.

Memorization

Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality

1 code implementation 28 Nov 2022 Yichen Jiang, Xiang Zhou, Mohit Bansal

Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.

Data Augmentation, Inductive Bias, +1

Learning and Analyzing Generation Order for Undirected Sequence Models

1 code implementation Findings (EMNLP) 2021 Yichen Jiang, Mohit Bansal

On examples with a maximum source and target length of 30 from De-En, WMT'16 English-Romanian, and WMT'21 English-Chinese translation tasks, our learned order outperforms all heuristic generation orders on four out of six tasks.

Machine Translation, Translation

Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks

1 code implementation 30 Sep 2021 Yichen Jiang, Mohit Bansal

Motivated by the failure of a Transformer model on the SCAN compositionality challenge (Lake and Baroni, 2018), which requires parsing a command into actions, we propose two auxiliary sequence prediction tasks that track the progress of function and argument semantics, as additional training supervision.
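The entry above describes auxiliary sequence prediction tasks used as additional training supervision. As a rough, generic sketch only (not the paper's implementation, and without its specific function- and argument-semantics targets), such supervision amounts to adding a weighted auxiliary loss to the main sequence loss; the names combined_loss and aux_weight below are hypothetical:

    import torch.nn.functional as F

    def combined_loss(main_logits, main_targets, aux_logits, aux_targets, aux_weight=0.5):
        # main_logits: (batch, tgt_len, vocab); aux_logits: (batch, tgt_len, aux_vocab)
        # main_targets / aux_targets: (batch, tgt_len) integer label tensors
        main_loss = F.cross_entropy(main_logits.transpose(1, 2), main_targets)
        aux_loss = F.cross_entropy(aux_logits.transpose(1, 2), aux_targets)
        # Auxiliary supervision is added as a weighted extra objective;
        # aux_weight is an illustrative hyperparameter, not a value from the paper.
        return main_loss + aux_weight * aux_loss

The paper's actual auxiliary targets and weighting are documented in its linked code implementation.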

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization

1 code implementation NAACL 2021 Yichen Jiang, Asli Celikyilmaz, Paul Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao

On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors and improved syntactic interpretability in the TPR layer outputs.

Abstractive Text Summarization
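For readers unfamiliar with tensor-product representations (TPRs), mentioned in the entry above, the following minimal sketch illustrates the general binding operation (a structure encoded as a sum of outer products of filler and role vectors, in the style of Smolensky, 1990). It is an illustration of the concept only, not the paper's TP-Transformer layer; all dimensions and variable names are hypothetical.

    import torch

    d_filler, d_role, seq_len = 8, 4, 5

    fillers = torch.randn(seq_len, d_filler)  # one content (filler) vector per token
    roles = torch.randn(seq_len, d_role)      # one structural (role) vector per token

    # Bind each filler to its role and sum: T = sum_i f_i outer r_i
    tpr = torch.einsum('if,ir->fr', fillers, roles)  # shape (d_filler, d_role)

    # Unbinding with a role vector approximately recovers its filler,
    # exactly so only if the role vectors are orthonormal.
    recovered = tpr @ roles[0]                       # shape (d_filler,)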

Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning

1 code implementation IJCNLP 2019 Yichen Jiang, Mohit Bansal

Multi-hop QA requires a model to connect multiple pieces of evidence scattered in a long context to answer the question.

Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension

1 code implementation ACL 2019 Yichen Jiang, Nitish Joshi, Yen-Chun Chen, Mohit Bansal

Multi-hop reading comprehension requires the model to explore and connect relevant information from multiple sentences/documents in order to answer the question about the context.

Multi-Hop Reading Comprehension, Sentence

Closed-Book Training to Improve Summarization Encoder Memory

no code implementations EMNLP 2018 Yichen Jiang, Mohit Bansal

A good neural sequence-to-sequence summarization model should have a strong encoder that can distill and memorize the important information from long input texts so that the decoder can generate salient summaries based on the encoder's memory.

Abstractive Text Summarization, Memorization
