Search Results for author: Kazuma Hashimoto

Found 43 papers, 16 papers with code

Few-Shot Intent Classification by Gauging Entailment Relationship Between Utterance and Semantic Label

no code implementations EMNLP (NLP4ConvAI) 2021 Jin Qu, Kazuma Hashimoto, Wenhao Liu, Caiming Xiong, Yingbo Zhou

Compared with DNNC, our proposed method is more efficient in both training and serving, since it gauges entailment between the query utterance and the labels instead of between the query and all training examples.
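As a rough illustration of this label-entailment setup, the sketch below scores an utterance against each label description and picks the best-entailed label; the toy overlap scorer, label descriptions, and function names are hypothetical stand-ins for a fine-tuned NLI model, not the paper's implementation.

```python
# Minimal sketch of label-entailment intent classification (toy stand-ins).
# A real system would use a fine-tuned NLI model; word overlap stands in
# for the entailment probability P(label description | utterance).

def nli_entailment_score(premise: str, hypothesis: str) -> float:
    """Toy stand-in for an NLI model's entailment probability."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

def classify(utterance: str, labels: dict[str, str]) -> str:
    # Score the utterance against each label description, not against
    # every training example (the source of DNNC's per-query cost).
    return max(labels, key=lambda name: nli_entailment_score(utterance, labels[name]))

labels = {
    "book_flight": "the user wants to book a flight",
    "check_weather": "the user asks about the weather",
}
print(classify("I need to book a flight to Tokyo", labels))  # book_flight
```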

Classification · intent-classification +2

[CASPI] Causal-aware Safe Policy Improvement for Task-oriented Dialogue

no code implementations ACL 2022 Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong

Furthermore, we demonstrate sample efficiency: our method, trained on only 20% of the data, is comparable to the current state-of-the-art method trained on 100% of the data on two out of three evaluation metrics.

Dialogue Management · Management +1

Simple Data Augmentation with the Mask Token Improves Domain Adaptation for Dialog Act Tagging

no code implementations EMNLP 2020 Semih Yavuz, Kazuma Hashimoto, Wenhao Liu, Nitish Shirish Keskar, Richard Socher, Caiming Xiong

The concept of Dialogue Act (DA) is universal across different task-oriented dialogue domains: the act of "request" carries the same speaker intention whether it is for a restaurant reservation or a flight booking.
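A minimal sketch of the mask-token augmentation named in the title, assuming a BERT-style "[MASK]" token; the masking rate, seed, and whitespace tokenization are illustrative choices, not the paper's exact recipe.

```python
import random

# Sketch of mask-token augmentation: randomly replace input tokens with the
# [MASK] token to create augmented training utterances.

def mask_augment(tokens: list[str], rate: float = 0.15, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    return [tok if rng.random() > rate else "[MASK]" for tok in tokens]

print(mask_augment("could you request a table for two please".split()))
```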

Data Augmentation · Domain Generalization

Take One Step at a Time to Know Incremental Utility of Demonstration: An Analysis on Reranking for Few-Shot In-Context Learning

no code implementations 16 Nov 2023 Kazuma Hashimoto, Karthik Raman, Michael Bendersky

Unlike previous work, we introduce a novel labeling method, incremental utility, which estimates how much incremental knowledge a demonstration brings to the LLM.
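The incremental-utility idea can be sketched as a difference of model scores with and without a demonstration in the prompt; `llm_score` below is a hypothetical stand-in for an LLM log-likelihood, and the toy examples are invented.

```python
# Sketch: a demonstration's label is the change in the model's score for the
# gold output once that demonstration is added to the prompt.

def llm_score(prompt: str, gold: str) -> float:
    """Toy stand-in: count gold-output words that already appear in the prompt."""
    return float(sum(w in prompt.lower() for w in gold.lower().split()))

def incremental_utility(prompt: str, demo: str, gold: str) -> float:
    # Utility of a demonstration = score with it minus score without it.
    return llm_score(prompt + "\n" + demo, gold) - llm_score(prompt, gold)

prompt = "review: a joyful, great ride ->"
gold = "positive"
for demo in ["review: great movie -> positive", "review: dull plot -> negative"]:
    print(f"{demo!r}: utility={incremental_utility(prompt, demo, gold)}")
```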

In-Context Learning · Multi-class Classification +1

Ambiguity-Aware In-Context Learning with Large Language Models

no code implementations14 Sep 2023 Lingyu Gao, Aditi Chaudhary, Krishna Srinivasan, Kazuma Hashimoto, Karthik Raman, Michael Bendersky

In-context learning (ICL), i.e., showing LLMs only a few task-specific demonstrations, has led to downstream gains with no task-specific fine-tuning required.

In-Context Learning · Semantic Similarity +3

Exploring the Viability of Synthetic Query Generation for Relevance Prediction

no code implementations 19 May 2023 Aditi Chaudhary, Karthik Raman, Krishna Srinivasan, Kazuma Hashimoto, Mike Bendersky, Marc Najork

While our experiments demonstrate that these modifications help improve the performance of QGen techniques, we also find that QGen approaches struggle to capture the full nuance of the relevance label space, and as a result the generated queries are not faithful to the desired relevance label.

Information Retrieval · Question Answering +2

GROOT: Corrective Reward Optimization for Generative Sequential Labeling

no code implementations 29 Sep 2022 Kazuma Hashimoto, Karthik Raman

GROOT works by training a generative sequential labeling model to match the decoder output distribution with that of the (black-box) reward function.
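One way to picture distribution matching against a black-box reward: turn the rewards for sampled outputs into a target distribution and penalize divergence from the model's own distribution. The softmax temperature and toy numbers below are assumptions, not GROOT's exact objective.

```python
import math

# Sketch of matching a model's output distribution to a (black-box) reward:
# sampled candidates get rewards, the rewards define a target distribution
# via a softmax, and the loss is the cross-entropy between the two.

def softmax(xs, temp=1.0):
    m = max(xs)
    exps = [math.exp((x - m) / temp) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

model_logprobs = [-0.5, -1.2, -2.3]   # model scores for 3 sampled outputs
rewards = [0.9, 0.4, 0.1]             # black-box reward for each output

target = softmax(rewards, temp=0.5)   # reward-induced target distribution
model_p = softmax(model_logprobs)
loss = -sum(t * math.log(p) for t, p in zip(target, model_p))
print(f"distribution-matching loss: {loss:.3f}")
```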

Decoder

Modeling Multi-hop Question Answering as Single Sequence Prediction

no code implementations ACL 2022 Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Nitish Shirish Keskar, Caiming Xiong

Fusion-in-Decoder (FiD) (Izacard and Grave, 2020) is a generative question answering (QA) model that leverages passage retrieval with a pre-trained transformer and has pushed the state of the art on single-hop QA.

Answer Generation · Decoder +4

OneAligner: Zero-shot Cross-lingual Transfer with One Rich-Resource Language Pair for Low-Resource Sentence Retrieval

no code implementations Findings (ACL) 2022 Tong Niu, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong

When finetuned on a single rich-resource language pair, be it English-centered or not, our model is able to match the performance of the ones finetuned on all language pairs under the same data budget, with less than a 2.0-point decrease in accuracy.

Machine Translation · Retrieval +3

Transforming Sequence Tagging Into A Seq2Seq Task

no code implementations 16 Mar 2022 Karthik Raman, Iftekhar Naim, Jiecao Chen, Kazuma Hashimoto, Kiran Yalasangi, Krishna Srinivasan

Large pretrained generative language models (LMs) have had great success in a wide range of sequence tagging and structured prediction tasks.
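A common way to cast tagging as a seq2seq problem is to linearize (token, tag) pairs into a single target string for the LM to generate; the sentinel-style format below is one illustrative choice, not necessarily the paper's.

```python
# Sketch: turn a tagging example into a seq2seq target string that a
# generative LM can be trained to emit.

def to_target(tokens: list[str], tags: list[str]) -> str:
    return " ".join(f"{tok}|{tag}" for tok, tag in zip(tokens, tags))

tokens = ["Kazuma", "works", "in", "Tokyo"]
tags = ["B-PER", "O", "O", "B-LOC"]
print(to_target(tokens, tags))  # Kazuma|B-PER works|O in|O Tokyo|B-LOC
```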

Hallucination · Structured Prediction +1

Choose Your QA Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering

no code implementations SpaNLP (ACL) 2022 Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral, Yingbo Zhou

Among several interesting findings, it is important to highlight that (1) the generative readers perform better in long context QA, (2) the extractive readers perform better in short context while also showing better out-of-domain generalization, and (3) the encoder of encoder-decoder PrLMs (e.g., T5) turns out to be a strong extractive reader and outperforms the standard choice of encoder-only PrLMs (e.g., RoBERTa).

Decoder · Domain Generalization +2

Dense Hierarchical Retrieval for Open-Domain Question Answering

1 code implementation Findings (EMNLP) 2021 Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, Philip S. Yu

In this work, we propose Dense Hierarchical Retrieval (DHR), a hierarchical framework that can generate accurate dense representations of passages by utilizing both macroscopic semantics in the document and microscopic semantics specific to each passage.
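The hierarchical idea can be sketched as two-stage retrieval: rank documents first (macroscopic semantics), then rank passages only within the surviving documents (microscopic semantics). The dot-product scoring over toy vectors below stands in for learned dense encoders.

```python
# Sketch of hierarchical dense retrieval with toy vectors in place of
# learned document/passage encoders.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dhr(query, docs, top_docs=1, top_passages=2):
    """docs: list of (doc_vec, [(passage_vec, passage_text), ...])."""
    # Stage 1: macroscopic, document-level retrieval.
    kept = sorted(docs, key=lambda d: dot(query, d[0]), reverse=True)[:top_docs]
    # Stage 2: microscopic, passage-level retrieval inside kept documents.
    passages = [p for _, ps in kept for p in ps]
    ranked = sorted(passages, key=lambda p: dot(query, p[0]), reverse=True)
    return [text for _, text in ranked[:top_passages]]

docs = [
    ([1.0, 0.0], [([0.9, 0.1], "passage A1"), ([0.8, 0.0], "passage A2")]),
    ([0.0, 1.0], [([0.1, 0.9], "passage B1")]),
]
print(dhr([1.0, 0.2], docs))  # ['passage A1', 'passage A2']
```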

Open-Domain Question Answering · Text Retrieval

RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering

1 code implementation ACL 2022 Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong

We present RnG-KBQA, a Rank-and-Generate approach for KBQA, which remedies the coverage issue with a generation model while preserving a strong generalization capability.
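A sketch of the rank-and-generate pattern: score enumerated candidate logical forms against the question, then hand the top-ranked ones to a generator that can compose the final form. Both components below are toy stand-ins for the learned ranker and seq2seq generator.

```python
# Sketch of rank-and-generate for KBQA with toy components.

def rank(question: str, candidates: list[str]) -> list[str]:
    q_words = set(question.lower().split())
    def overlap(c: str) -> int:
        c_words = set(c.lower().replace("(", " ").replace(")", " ").replace("_", " ").split())
        return len(c_words & q_words)
    return sorted(candidates, key=overlap, reverse=True)

def generate(question: str, top_candidates: list[str]) -> str:
    # A real generator conditions on the question plus the top candidates and
    # may emit an unseen logical form; this toy version returns the best one.
    return top_candidates[0]

question = "what is the capital of japan"
candidates = ["(population_of japan)", "(capital_of japan)"]
print(generate(question, rank(question, candidates)[:2]))  # (capital_of japan)
```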

Entity Linking · Knowledge Base Question Answering +1

Causal-aware Safe Policy Improvement for Task-oriented dialogue

1 code implementation 10 Mar 2021 Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong

This method gives guarantees on the dialogue policy's performance and also learns to shape rewards according to the intentions behind human responses, rather than just mimicking demonstration data; this, coupled with batch RL, helps the overall sample efficiency of the framework.

Dialogue Management · Management +1

Neural Text Generation with Artificial Negative Examples

no code implementations 28 Dec 2020 Keisuke Shirai, Kazuma Hashimoto, Akiko Eriguchi, Takashi Ninomiya, Shinsuke Mori

In this paper, we propose to suppress an arbitrary type of error by training the text generation model in a reinforcement learning framework, where we use a trainable reward function that is capable of discriminating between references and sentences containing the targeted type of error.
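A hedged sketch of that training signal: corrupt a reference to create an artificial negative, score samples with a reward function (a trained discriminator in practice, a token-overlap ratio here), and apply a REINFORCE-style loss, -reward * log p(sample). All names and numbers are illustrative.

```python
import random

# Sketch: artificial negatives plus a reward-weighted policy-gradient loss.

def corrupt(tokens: list[str], rng: random.Random) -> list[str]:
    out = tokens[:]                       # artificial negative: drop one token
    out.pop(rng.randrange(len(out)))
    return out

def reward(sentence: list[str], reference: list[str]) -> float:
    # A trained discriminator in practice; token-overlap ratio as a toy proxy.
    return len(set(sentence) & set(reference)) / len(set(reference))

rng = random.Random(0)
reference = "the cat sat on the mat".split()
sample = corrupt(reference, rng)
sample_logprob = -3.2                     # toy log p(sample) from the generator
loss = -reward(sample, reference) * sample_logprob
print(sample, f"loss={loss:.2f}")
```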

Image Captioning · Machine Translation +2

CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers

2 code implementations ICLR 2021 Shiyang Li, Semih Yavuz, Kazuma Hashimoto, Jia Li, Tong Niu, Nazneen Rajani, Xifeng Yan, Yingbo Zhou, Caiming Xiong

Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood.

Ranked #2 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.1 (using extra training data)

counterfactual · Dialogue State Tracking +1

A High-Quality Multilingual Dataset for Structured Documentation Translation

1 code implementation WS 2019 Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Marshall, Richard Socher, Caiming Xiong

We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search.
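The XML constraint can be sketched as pruning the candidate tokens at each decoding step so that a hypothesis may only close the currently open element; the tiny tag inventory below is illustrative, not the paper's full constraint set.

```python
# Sketch of one XML well-formedness constraint for beam search: a closing tag
# is allowed only if it matches the innermost open element.

def allowed_next(open_stack: list[str], vocab: list[str]) -> list[str]:
    ok = []
    for tok in vocab:
        if tok.startswith("</"):                        # closing tag
            if open_stack and tok == f"</{open_stack[-1]}>":
                ok.append(tok)
        else:
            ok.append(tok)                              # text or opening tag
    return ok

vocab = ["<b>", "</b>", "</i>", "hello"]
print(allowed_next(["b"], vocab))  # ['<b>', '</b>', 'hello'] -- '</i>' pruned
```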

Translation · Vocal Bursts Intensity Prediction

CO-Search: COVID-19 Information Retrieval with Semantic Search, Question Answering, and Abstractive Summarization

no code implementations 17 Jun 2020 Andre Esteva, Anuprit Kale, Romain Paulus, Kazuma Hashimoto, Wenpeng Yin, Dragomir Radev, Richard Socher

The COVID-19 global pandemic has resulted in international efforts to understand, track, and mitigate the disease, yielding a significant corpus of COVID-19 and SARS-CoV-2-related publications across scientific disciplines.

Abstractive Text Summarization · Information Retrieval +3

Adv-BERT: BERT is not robust on misspellings! Generating natural adversarial samples on BERT

no code implementations 27 Feb 2020 Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong

There is a growing body of literature claiming that deep neural networks are brittle when dealing with maliciously crafted adversarial examples.

Question Answering · Sentence +1

Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering

2 code implementations ICLR 2020 Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, Caiming Xiong

Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question.
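A rough sketch of retrieving reasoning paths: start from query-scored pages, follow hyperlinks, and keep a beam of the highest-scoring paths. The toy graph and overlap scorer below stand in for Wikipedia and the learned retriever.

```python
# Sketch of beam search over a hyperlink graph for multi-hop evidence paths.

def score(query: str, page: str) -> float:
    return len(set(query.lower().split()) & set(page.lower().split()))

def retrieve_paths(query, graph, hops=2, beam=2):
    starts = sorted(graph, key=lambda p: score(query, p), reverse=True)[:beam]
    paths = [[p] for p in starts]
    for _ in range(hops - 1):
        expanded = [path + [nxt] for path in paths for nxt in graph[path[-1]]]
        paths = sorted(expanded, key=lambda pa: score(query, pa[-1]), reverse=True)[:beam]
    return paths

graph = {
    "Natural language processing": ["Machine translation", "Parsing"],
    "Machine translation": ["Neural machine translation"],
    "Parsing": ["Machine translation"],
    "Neural machine translation": [],
}
print(retrieve_paths("neural machine translation models", graph))
```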

Question Answering · Retrieval

Multilingual Extractive Reading Comprehension by Runtime Machine Translation

1 code implementation 10 Sep 2018 Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

Given a target language without RC training data and a pivot language with RC training data (e.g., English), our method leverages existing RC resources in the pivot language by combining a competitive RC model in the pivot language with an attentive Neural Machine Translation (NMT) model.

Machine Translation · NMT +2

Accelerated Reinforcement Learning for Sentence Generation by Vocabulary Prediction

1 code implementation NAACL 2019 Kazuma Hashimoto, Yoshimasa Tsuruoka

A major obstacle in reinforcement learning-based sentence generation is the large action space whose size is equal to the vocabulary size of the target-side language.
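The action-space reduction can be sketched as predicting a small candidate vocabulary per source sentence and computing the softmax only over that subset; the co-occurrence lexicon below is a toy stand-in for the paper's vocabulary predictor.

```python
import math

# Sketch: restrict the output softmax to a predicted small vocabulary
# instead of the full target-side vocabulary.

def predict_small_vocab(source_tokens, lexicon, k=4):
    cands = {t for s in source_tokens for t in lexicon.get(s, [])}
    return sorted(cands)[:k]

def softmax_over(logits: dict) -> dict:
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

lexicon = {"neko": ["cat", "kitten"], "wa": ["the", "a"]}
small = predict_small_vocab(["neko", "wa"], lexicon)
logits = {w: float(len(w)) for w in small}   # toy decoder logits
print(softmax_over(logits))                  # softmax over ~4 words, not 50k
```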

Image Captioning · Machine Translation +5

Neural Machine Translation with Source-Side Latent Graph Parsing

no code implementations EMNLP 2017 Kazuma Hashimoto, Yoshimasa Tsuruoka

This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences.

Machine Translation · NMT +1

Character-based Decoding in Tree-to-Sequence Attention-based Neural Machine Translation

no code implementations WS 2016 Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

This paper reports our systems (UT-AKY) submitted to the 3rd Workshop on Asian Translation 2016 (WAT'16) and their results on the English-to-Japanese translation task.

Decoder · Machine Translation +2

Domain Adaptation for Neural Networks by Parameter Augmentation

no code implementations WS 2016 Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka

Recently, recurrent neural networks have been shown to be successful on a variety of NLP tasks such as caption generation; however, existing domain adaptation techniques are limited to (1) tuning the model parameters on the target dataset after training on the source dataset, or (2) designing the network to have dual outputs, one for the source domain and the other for the target domain.

Caption Generation · Domain Adaptation

Tree-to-Sequence Attentional Neural Machine Translation

1 code implementation ACL 2016 Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

Most of the existing Neural Machine Translation (NMT) models focus on the conversion of sequential data and do not directly use syntactic information.

Decoder · Machine Translation +3

Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings

no code implementations ACL 2016 Kazuma Hashimoto, Yoshimasa Tsuruoka

We present a novel method for jointly learning compositional and non-compositional phrase embeddings by adaptively weighting both types of embeddings using a compositionality scoring function.
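A minimal sketch of the adaptive weighting: interpolate between a composed vector and a directly learned phrase vector, with the mixture weight coming from a compositionality score (a learned function in the paper, a fixed constant here).

```python
# Sketch: phrase embedding = alpha * composed + (1 - alpha) * non-compositional,
# where alpha reflects how compositional the phrase is.

def interpolate(composed, noncomp, alpha):
    return [alpha * c + (1 - alpha) * n for c, n in zip(composed, noncomp)]

w1, w2 = [0.2, 0.8], [0.6, 0.4]
composed = [(a + b) / 2 for a, b in zip(w1, w2)]   # toy composition function
noncomp = [0.9, 0.1]                               # learned "kick the bucket" vector
alpha = 0.2                                        # low score => idiomatic phrase
print(interpolate(composed, noncomp, alpha))
```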
