Search Results for author: Jungo Kasai

Found 22 papers, 13 papers with code

Non-autoregressive Machine Translation with Disentangled Context Transformer

1 code implementation • ICML 2020 • Jungo Kasai, James Cross, Marjan Ghazvininejad, Jiatao Gu

State-of-the-art neural machine translation models generate a translation from left to right and every step is conditioned on the previously generated tokens.

Machine Translation • Translation
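The snippet above describes the left-to-right baseline that this paper's parallel, disentangled-context decoding avoids. As a hedged illustration (not the paper's code), a greedy autoregressive loop might look like the following; `model`, `bos_id`, and `eos_id` are placeholder names:

```python
import torch

def greedy_decode(model, src, bos_id, eos_id, max_len=64):
    """One token per step; each step conditions on the full prefix so far."""
    ys = torch.tensor([[bos_id]])               # (1, t) running prefix
    for _ in range(max_len):
        logits = model(src, ys)                 # (1, t, vocab), assumed interface
        next_id = logits[:, -1].argmax(dim=-1)  # condition on all prior tokens
        ys = torch.cat([ys, next_id.unsqueeze(0)], dim=1)
        if next_id.item() == eos_id:            # stop at end-of-sequence
            break
    return ys
```

The serial dependency in this loop is exactly what non-autoregressive decoders trade away for speed.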

One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval

1 code implementation • NeurIPS 2021 • Akari Asai, Xinyan Yu, Jungo Kasai, Hannaneh Hajishirzi

We present Cross-lingual Open-Retrieval Answer Generation (CORA), the first unified many-to-many question answering (QA) model that can answer questions across many languages, even for ones without language-specific annotated data or knowledge sources.

Passage Retrieval • Question Answering • +1

Finetuning Pretrained Transformers into RNNs

1 code implementation • EMNLP 2021 • Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith

Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.

Language Modelling • Machine Translation • +1
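As a loose illustration of the swap step above (not the paper's released code), the sketch below shows a kernel-based, linear-complexity attention of the kind that replaces softmax attention before finetuning; the single-head simplification and the elu+1 feature map are assumptions borrowed from the linear-attention literature:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_map(x):
    # positive feature map common in linear-attention work (elu + 1)
    return F.elu(x) + 1.0

class LinearAttention(nn.Module):
    """Single-head, non-causal stand-in; in the swap-then-finetune recipe
    the Q/K/V projection weights are reused from the pretrained layer."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (batch, time, dim)
        q = feature_map(self.q(x))
        k = feature_map(self.k(x))
        v = self.v(x)
        kv = torch.einsum('btd,bte->bde', k, v)  # O(time) summary of K, V
        z = 1.0 / (q @ k.sum(dim=1).unsqueeze(-1)).clamp(min=1e-6)
        return torch.einsum('btd,bde->bte', q, kv) * z
```

Because the key-value summary is a fixed-size state, the causal version of this computation can be unrolled as an RNN at generation time, which is where the title's "into RNNs" comes from.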

GENIE: A Leaderboard for Human-in-the-Loop Evaluation of Text Generation

no code implementations • 17 Jan 2021 • Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, Daniel S. Weld

Leaderboards have eased model development for many NLP datasets by standardizing their evaluation and delegating it to an independent external repository.

Machine Translation • Reading Comprehension • +2

Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation

1 code implementation • ICLR 2021 • Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith

We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation.

Knowledge Distillation • Machine Translation • +1
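On the layer-allocation point, the paper's headline configuration pairs a deep encoder with a one-layer decoder. A minimal sketch of that allocation with stock PyTorch (the model dimensions are illustrative, not the paper's exact setup):

```python
import torch.nn as nn

# Deep-encoder, shallow-decoder allocation: encoder layers run once over
# the whole source in parallel, while each decoder layer runs once per
# generated token, so a shallow decoder dominates the latency savings.
deep_shallow = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=12,
    num_decoder_layers=1,
)
```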

Non-Autoregressive Machine Translation with Disentangled Context Transformer

1 code implementation • 15 Jan 2020 • Jungo Kasai, James Cross, Marjan Ghazvininejad, Jiatao Gu

State-of-the-art neural machine translation models generate a translation from left to right and every step is conditioned on the previously generated tokens.

Machine Translation • Translation

Cracking the Contextual Commonsense Code: Understanding Commonsense Reasoning Aptitude of Deep Contextual Representations

no code implementations • WS 2019 • Jeff Da, Jungo Kasai

Pretrained deep contextual representations have advanced the state-of-the-art on various commonsense NLP tasks, but we lack a concrete understanding of the capability of these models.

Fine-tuning • Knowledge Graphs

ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks

1 code implementation • 4 Sep 2019 • Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, Dragomir R. Radev

Scientific article summarization is challenging: large, annotated corpora are not available, and the summary should ideally include the article's impacts on the research community.

Scientific Document Summarization

Low-resource Deep Entity Resolution with Transfer and Active Learning

no code implementations • ACL 2019 • Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, Lucian Popa

Recent adaptation of deep learning methods for entity resolution (ER) mitigates the need for dataset-specific feature engineering by constructing distributed representations of entity records.

Active Learning • Entity Resolution • +2
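As one hedged reading of the active-learning component above (the paper's actual query strategy may differ), an uncertainty-sampling round over candidate record pairs could look like the following; `model`, `label_fn`, and the data pools are illustrative placeholders:

```python
import torch

def active_learning_round(model, unlabeled_pairs, labeled, label_fn, k=10):
    """Query the k record pairs whose predicted match probability is
    closest to 0.5, i.e. where the model is least certain."""
    with torch.no_grad():
        probs = model(unlabeled_pairs)        # (N,) match probabilities
    uncertainty = -(probs - 0.5).abs()        # peaks at the decision boundary
    for i in uncertainty.topk(k).indices.tolist():
        labeled.append((unlabeled_pairs[i], label_fn(unlabeled_pairs[i])))
    return labeled
```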

Syntax-aware Neural Semantic Role Labeling with Supertags

1 code implementation • NAACL 2019 • Jungo Kasai, Dan Friedman, Robert Frank, Dragomir Radev, Owen Rambow

We introduce a new syntax-aware model for dependency-based semantic role labeling that outperforms syntax-agnostic models for English and Spanish.

Semantic Role Labeling

Polyglot Contextual Representations Improve Crosslingual Transfer

1 code implementation • NAACL 2019 • Phoebe Mulcaire, Jungo Kasai, Noah A. Smith

We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages.

Dependency Parsing • Language Modelling • +3

End-to-end Graph-based TAG Parsing with Neural Networks

1 code implementation • NAACL 2018 • Jungo Kasai, Robert Frank, Pauli Xu, William Merrill, Owen Rambow

We present a graph-based Tree Adjoining Grammar (TAG) parser that uses BiLSTMs, highway connections, and character-level CNNs.

POS
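To make the ingredients named above concrete, here is a minimal, assumption-laden sketch of a character-level CNN feeding a BiLSTM through a highway connection; all dimensions and names are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CharCNNBiLSTM(nn.Module):
    def __init__(self, n_chars=100, char_dim=32, word_dim=64, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size=3, padding=1)
        self.highway_t = nn.Linear(word_dim, word_dim)   # transform gate
        self.highway_h = nn.Linear(word_dim, word_dim)
        self.bilstm = nn.LSTM(word_dim, hidden,
                              bidirectional=True, batch_first=True)

    def forward(self, chars):                   # chars: (words, max_chars)
        x = self.char_emb(chars).transpose(1, 2)         # (words, char_dim, max_chars)
        x = torch.relu(self.conv(x)).max(dim=2).values   # pool to one vector per word
        t = torch.sigmoid(self.highway_t(x))             # highway connection
        x = t * torch.relu(self.highway_h(x)) + (1 - t) * x
        out, _ = self.bilstm(x.unsqueeze(0))             # (1, words, 2 * hidden)
        return out
```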

Robust Multilingual Part-of-Speech Tagging via Adversarial Training

1 code implementation • NAACL 2018 • Michihiro Yasunaga, Jungo Kasai, Dragomir Radev

Adversarial training (AT) is a powerful regularization method for neural networks, aiming to achieve robustness to input perturbations.

Chunking • Dependency Parsing • +3
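The sketch below illustrates the general recipe the snippet describes, gradient-based input perturbation used as a regularizer, applied at the embedding level as is common for text; `epsilon` and the model interface are assumptions, not the paper's exact setup:

```python
import torch

def adversarial_loss(model, embeddings, labels, loss_fn, epsilon=0.02):
    """Loss on inputs perturbed in the worst-case gradient direction;
    typically added to the clean loss as a regularizer."""
    embeddings = embeddings.detach().requires_grad_(True)
    loss = loss_fn(model(embeddings), labels)
    grad, = torch.autograd.grad(loss, embeddings)
    # step a small distance in the direction that maximally increases the loss
    # (global L2 normalization here is a simplification)
    perturbed = embeddings + epsilon * grad / (grad.norm() + 1e-12)
    return loss_fn(model(perturbed.detach()), labels)
```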

TAG Parsing with Neural Networks and Vector Representations of Supertags

no code implementations • EMNLP 2017 • Jungo Kasai, Bob Frank, Tom McCoy, Owen Rambow, Alexis Nasr

We present supertagging-based models for Tree Adjoining Grammar parsing that use neural network architectures and dense vector representation of supertags (elementary trees) to achieve state-of-the-art performance in unlabeled and labeled attachment scores.
