Search Results for author: Semih Yavuz

Found 38 papers, 14 papers with code

Simple Data Augmentation with the Mask Token Improves Domain Adaptation for Dialog Act Tagging

no code implementations EMNLP 2020 Semih Yavuz, Kazuma Hashimoto, Wenhao Liu, Nitish Shirish Keskar, Richard Socher, Caiming Xiong

The concept of Dialogue Act (DA) is universal across different task-oriented dialogue domains - the act of "request" carries the same speaker intention whether it is for restaurant reservation or flight booking.

Data Augmentation · Domain Generalization
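
The listing gives only the title and this motivating sentence, but the title names a concrete technique. Below is a minimal sketch of what mask-token augmentation could look like; the function name, the 15% replacement rate, and the `[MASK]` string are illustrative assumptions, not details from the paper.

```python
import random

def mask_augment(tokens, mask_token="[MASK]", mask_rate=0.15, seed=None):
    """Return a copy of `tokens` with a random subset replaced by the mask token.

    Illustrative sketch of mask-token data augmentation; the 15% rate is an
    assumption, not a value taken from the paper.
    """
    rng = random.Random(seed)
    return [mask_token if rng.random() < mask_rate else t for t in tokens]

# Example: augment a dialog utterance before feeding it to a DA tagger.
utterance = "i would like to book a table for two tonight".split()
print(mask_augment(utterance, seed=0))
```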

DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text

no code implementations 31 Oct 2023 Wenting Zhao, Ye Liu, Tong Niu, Yao Wan, Philip S. Yu, Shafiq Joty, Yingbo Zhou, Semih Yavuz

Moreover, a significant gap in the current landscape is the absence of a realistic benchmark for evaluating the effectiveness of grounding LLMs on heterogeneous knowledge sources (e.g., knowledge base and text).

Knowledge Graphs · Open-Domain Question Answering +2

L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models

no code implementations 29 Sep 2023 Ansong Ni, Pengcheng Yin, Yilun Zhao, Martin Riddell, Troy Feng, Rui Shen, Stephen Yin, Ye Liu, Semih Yavuz, Caiming Xiong, Shafiq Joty, Yingbo Zhou, Dragomir Radev, Arman Cohan

Recently, large language models (LLMs), especially those that are pretrained on code, have demonstrated strong capabilities in generating programs from natural language inputs in a few-shot or even zero-shot manner.

Code Generation · Math +1

Investigating Answerability of LLMs for Long-Form Question Answering

no code implementations 15 Sep 2023 Meghana Moorthy Bhat, Rui Meng, Ye Liu, Yingbo Zhou, Semih Yavuz

As we embark on a new era of LLMs, it becomes increasingly crucial to understand their capabilities, limitations, and differences.

Long Form Question Answering · Question Generation +1

XGen-7B Technical Report

1 code implementation 7 Sep 2023 Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong

Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context.

Exploring the Integration Strategies of Retriever and Large Language Models

no code implementations 24 Aug 2023 Ye Liu, Semih Yavuz, Rui Meng, Meghana Moorthy, Shafiq Joty, Caiming Xiong, Yingbo Zhou

This paper aims to fill this gap by investigating different methods of combining retrieved passages with LLMs to enhance answer generation.

Answer Generation · Open-Domain Question Answering
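
As a concrete illustration of the simplest such integration strategy, the sketch below concatenates retrieved passages into the LLM prompt. The helper name and prompt template are our own assumptions; the paper compares several strategies beyond plain concatenation.

```python
def build_prompt(question: str, passages: list[str], max_passages: int = 3) -> str:
    """Concatenation-style integration: place top-ranked passages in the
    prompt and ask the LLM to answer from them. The template is an
    illustrative assumption, not the paper's."""
    context = "\n\n".join(
        f"Passage {i + 1}: {p}" for i, p in enumerate(passages[:max_passages])
    )
    return (
        f"{context}\n\n"
        f"Question: {question}\n"
        f"Answer based only on the passages above:"
    )

prompt = build_prompt(
    "Who wrote The Old Man and the Sea?",
    ["Ernest Hemingway wrote The Old Man and the Sea in 1951.",
     "The novella won the Pulitzer Prize for Fiction in 1953."],
)
print(prompt)
```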

Few-shot Unified Question Answering: Tuning Models or Prompts?

no code implementations 23 May 2023 Srijan Bansal, Semih Yavuz, Bo Pang, Meghana Bhat, Yingbo Zhou

Question-answering (QA) tasks often investigate specific question types, knowledge domains, or reasoning skills, leading to specialized models catering to specific categories of QA tasks.

Question Answering · Transfer Learning

HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution

no code implementations 12 May 2023 Ye Liu, Semih Yavuz, Rui Meng, Dragomir Radev, Caiming Xiong, Yingbo Zhou

It comprises two central pillars: (1) We parse the question of varying complexity into an intermediate representation, named H-expression, which is composed of simple questions as the primitives and symbolic operations representing the relationships among them; (2) To execute the resulting H-expressions, we design a hybrid executor, which integrates the deterministic rules to translate the symbolic operations with a drop-in neural reader network to answer each decomposed simple question.

Knowledge Graphs · Question Answering +1
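
A toy rendering of the two pillars described above, assuming a minimal H-expression with simple questions as leaves and symbolic operations as internal nodes. The class names, the operation set, and the stand-in reader are illustrative, not the paper's.

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Ask:                     # primitive: a simple natural-language question
    question: str

@dataclass
class Op:                      # symbolic operation over sub-expressions
    name: str                  # e.g. "intersect", "count"
    args: list["HExpr"]

HExpr = Union[Ask, Op]

def execute(expr: HExpr, reader: Callable[[str], set]) -> object:
    """Hybrid execution: deterministic rules for symbolic operations, a
    drop-in neural reader (here, any callable returning an answer set)
    for the decomposed simple questions."""
    if isinstance(expr, Ask):
        return reader(expr.question)
    args = [execute(a, reader) for a in expr.args]
    if expr.name == "intersect":
        return set.intersection(*args)
    if expr.name == "count":
        return len(args[0])
    raise ValueError(f"unknown operation: {expr.name}")

# Stand-in for the neural reader, keyed on the question text.
toy_reader = {"Which rivers are in Texas?": {"Rio Grande", "Red River"},
              "Which rivers are longer than 1000 miles?": {"Rio Grande", "Missouri"}}.get

h_expr = Op("intersect", [Ask("Which rivers are in Texas?"),
                          Ask("Which rivers are longer than 1000 miles?")])
print(execute(h_expr, toy_reader))  # {'Rio Grande'}
```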

Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning

1 code implementation 3 Apr 2023 Lifu Tu, Jin Qu, Semih Yavuz, Shafiq Joty, Wenhao Liu, Caiming Xiong, Yingbo Zhou

Our results demonstrate the strong and efficient modeling ability of NLI-based classifiers and the large cross-lingual transfer improvements achieved by our aligned prompts, particularly in few-shot settings.

Cross-Lingual Transfer · intent-classification +4
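
For readers unfamiliar with prompt-tuning, here is a minimal sketch of the general mechanism: a small matrix of trainable embeddings is prepended to a frozen backbone's input embeddings, and only the prompt receives gradients. The sizes and single-prompt design are assumptions; the paper's aligned cross-lingual prompts are more elaborate.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to frozen input embeddings."""

    def __init__(self, prompt_len: int = 20, hidden: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# During training only soft_prompt.parameters() receive gradients;
# the backbone encoder stays frozen.
soft_prompt = SoftPrompt()
x = torch.randn(4, 32, 768)          # stand-in for token embeddings
print(soft_prompt(x).shape)          # torch.Size([4, 52, 768])
```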

AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation

no code implementations 17 Dec 2022 Rui Meng, Ye Liu, Semih Yavuz, Divyansh Agarwal, Lifu Tu, Ning Yu, JianGuo Zhang, Meghana Bhat, Yingbo Zhou

Dense retrievers have made significant strides in text retrieval and open-domain question answering, even though most achievements were made possible only with large amounts of human supervision.

Data Augmentation · Open-Domain Question Answering +2

Uni-Parser: Unified Semantic Parser for Question Answering on Knowledge Base and Database

no code implementations 9 Nov 2022 Ye Liu, Semih Yavuz, Rui Meng, Dragomir Radev, Caiming Xiong, Yingbo Zhou

Parsing natural language questions into executable logical forms is a useful and interpretable way to perform question answering on structured data such as knowledge bases (KB) or databases (DB).

Question Answering · Semantic Parsing

Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors

1 code implementation 25 May 2022 Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett

We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.

Abstractive Text Summarization

Modeling Multi-hop Question Answering as Single Sequence Prediction

no code implementations ACL 2022 Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Nitish Shirish Keskar, Caiming Xiong

Fusion-in-Decoder (FiD) (Izacard and Grave, 2020) is a generative question answering (QA) model that leverages passage retrieval with a pre-trained transformer and has pushed the state of the art on single-hop QA.

Answer Generation · Generative Question Answering +3
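
A shape-level sketch of the Fusion-in-Decoder layout the abstract refers to: each (question, passage) pair is encoded independently, and the encoder states are concatenated so the decoder can attend over all passages jointly. The helper name and the toy encoder are assumptions.

```python
import torch

def fusion_in_decoder_encode(encoder, question_ids, passage_ids_list):
    """Encode each (question, passage) pair separately, then concatenate
    the encoder states along the sequence axis. `encoder` is any module
    mapping token ids to (seq_len, hidden) states."""
    states = []
    for passage_ids in passage_ids_list:
        pair = torch.cat([question_ids, passage_ids])     # one pair per passage
        states.append(encoder(pair))                      # (seq_len, hidden)
    return torch.cat(states, dim=0)                       # (n_passages * seq_len, hidden)

# Toy encoder: an embedding lookup standing in for a pretrained transformer.
embed = torch.nn.Embedding(100, 16)
q = torch.randint(0, 100, (8,))
passages = [torch.randint(0, 100, (24,)) for _ in range(3)]
fused = fusion_in_decoder_encode(embed, q, passages)
print(fused.shape)  # torch.Size([96, 16]) -> the decoder cross-attends over this
```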

Choose Your QA Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering

no code implementations SpaNLP (ACL) 2022 Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral, Yingbo Zhou

Among several interesting findings, it is important to highlight that (1) the generative readers perform better in long context QA, (2) the extractive readers perform better in short context while also showing better out-of-domain generalization, and (3) the encoder of encoder-decoder PrLMs (e.g., T5) turns out to be a strong extractive reader and outperforms the standard choice of encoder-only PrLMs (e.g., RoBERTa).

Domain Generalization · Multi-Task Learning +1

Dense Hierarchical Retrieval for Open-Domain Question Answering

1 code implementation Findings (EMNLP) 2021 Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, Philip S. Yu

In this work, we propose Dense Hierarchical Retrieval (DHR), a hierarchical framework that can generate accurate dense representations of passages by utilizing both macroscopic semantics in the document and microscopic semantics specific to each passage.

Open-Domain Question Answering · Retrieval +1
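
A minimal sketch of the two-stage idea, assuming plain dot-product scoring: documents are scored first (macroscopic semantics), then only the passages of the surviving documents are scored (microscopic semantics). All names here are illustrative, not DHR's actual interface.

```python
import numpy as np

def hierarchical_retrieve(q_vec, doc_vecs, passages_by_doc, top_docs=2, top_psgs=3):
    """Stage 1: rank documents; stage 2: rank passages within the kept
    documents. Returns (doc_id, passage_id, score) triples."""
    doc_scores = doc_vecs @ q_vec
    keep = np.argsort(-doc_scores)[:top_docs]
    candidates = [(int(d), i, float(p_vec @ q_vec))
                  for d in keep
                  for i, p_vec in enumerate(passages_by_doc[d])]
    candidates.sort(key=lambda t: -t[2])
    return candidates[:top_psgs]

rng = np.random.default_rng(0)
q = rng.normal(size=64)
docs = rng.normal(size=(5, 64))                      # 5 document embeddings
psgs = [rng.normal(size=(4, 64)) for _ in range(5)]  # 4 passages per document
print(hierarchical_retrieve(q, docs, psgs))
```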

RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering

1 code implementation ACL 2022 Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong

We present RnG-KBQA, a Rank-and-Generate approach for KBQA, which remedies the coverage issue with a generation model while preserving a strong generalization capability.

Entity Linking · Knowledge Base Question Answering +1
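
A skeleton of the rank-and-generate pipeline as the abstract describes it: a ranker scores enumerated candidate logical forms, and a generation model conditioned on the question plus the top-ranked candidates emits the final form (which may fall outside the candidate set, addressing coverage). The stand-in ranker and generator below are purely illustrative.

```python
def rank_and_generate(question, candidates, ranker, generator, top_k=5):
    """Rank candidate logical forms, then let a generator compose the
    final form from the question and the top-k candidates."""
    ranked = sorted(candidates, key=lambda lf: ranker(question, lf), reverse=True)
    return generator(question, ranked[:top_k])

# Toy stand-ins: score by token overlap, "generate" by echoing the best candidate.
def toy_ranker(q, lf):
    lf_tokens = lf.lower().replace("(", " ").replace(")", " ").split()
    return len(set(q.lower().split()) & set(lf_tokens))

def toy_generator(q, top_forms):
    return top_forms[0]

forms = ["(headquarters Salesforce)", "(founders Salesforce)"]
print(rank_and_generate("where is the headquarters of salesforce", forms,
                        toy_ranker, toy_generator))  # (headquarters Salesforce)
```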

Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding

no code implementations Findings (EMNLP) 2021 Shiyang Li, Semih Yavuz, Wenhu Chen, Xifeng Yan

Task-adaptive pre-training (TAPT) and Self-training (ST) have emerged as the major semi-supervised approaches to improve natural language understanding (NLU) tasks with massive amounts of unlabeled data.

named-entity-recognition · Named Entity Recognition +6
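
For context, a minimal self-training loop of the kind referred to above, assuming a scikit-learn-style classifier: fit on labeled data, pseudo-label the unlabeled pool, keep confident predictions, and refit on the union. The threshold and round count are illustrative; TAPT would additionally continue pre-training the backbone on the same unlabeled text before any fine-tuning.

```python
def self_train(model, X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    """Iteratively grow the labeled set with confident pseudo-labels.
    `model` needs fit() and predict_proba() in the scikit-learn style."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        model.fit(X, y)
        confident, rest = [], []
        for x in pool:
            probs = model.predict_proba([x])[0]
            label = int(max(range(len(probs)), key=probs.__getitem__))
            if probs[label] >= threshold:
                confident.append((x, label))    # adopt the pseudo-label
            else:
                rest.append(x)                  # stay in the unlabeled pool
        X += [x for x, _ in confident]
        y += [lab for _, lab in confident]
        pool = rest
    return model

# Usage with any scikit-learn-style classifier, e.g.:
# from sklearn.linear_model import LogisticRegression
# self_train(LogisticRegression(), X_lab, y_lab, X_unlab)
```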

Stage-wise Fine-tuning for Graph-to-Text Generation

1 code implementation ACL 2021 Qingyun Wang, Semih Yavuz, Victoria Lin, Heng Ji, Nazneen Rajani

Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.

Ranked #3 on Data-to-Text Generation on WebNLG (using extra training data)

Data-to-Text Generation · KB-to-Language Generation +2

Unsupervised Paraphrasing with Pretrained Language Models

no code implementations EMNLP 2021 Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong

To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token for the next generation step.

Blocking · Language Modelling +3
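
The abstract states the Dynamic Blocking (DB) rule precisely enough to sketch: if the previously generated token occurs in the source, forbid the token that follows it in the source at the next decoding step. The logit-mask packaging below is our own simplification, and the method may block occurrences stochastically rather than deterministically as here.

```python
def dynamic_blocking_mask(source_ids, prev_token, vocab_size):
    """Additive logit mask for the next decoding step: wherever
    `prev_token` appears in the source, the token that follows it in the
    source is set to -inf (i.e., it cannot be copied verbatim)."""
    mask = [0.0] * vocab_size
    for i, tok in enumerate(source_ids[:-1]):
        if tok == prev_token:
            mask[source_ids[i + 1]] = float("-inf")
    return mask

source = [12, 7, 9, 7, 31]          # toy source token ids
print(dynamic_blocking_mask(source, prev_token=7, vocab_size=40)[9])   # -inf
print(dynamic_blocking_mask(source, prev_token=7, vocab_size=40)[31])  # -inf
```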

CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers

2 code implementations ICLR 2021 Shiyang Li, Semih Yavuz, Kazuma Hashimoto, Jia Li, Tong Niu, Nazneen Rajani, Xifeng Yan, Yingbo Zhou, Caiming Xiong

Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood.

Ranked #2 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.1 (using extra training data)

counterfactual · Dialogue State Tracking +1

Neural Assistant: Joint Action Prediction, Response Generation, and Latent Knowledge Reasoning

1 code implementation 31 Oct 2019 Arvind Neelakantan, Semih Yavuz, Sharan Narang, Vishaal Prasad, Ben Goodrich, Daniel Duckworth, Chinnadhurai Sankar, Xifeng Yan

In this paper, we develop Neural Assistant: a single neural network model that takes conversation history and an external knowledge source as input and jointly produces both text response and action to be taken by the system as output.

Response Generation · Retrieval +1

Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset

1 code implementation IJCNLP 2019 Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Daniel Duckworth, Semih Yavuz, Ben Goodrich, Amit Dubey, Andy Cedilnik, Kyu-Young Kim

A significant barrier to progress in data-driven approaches to building dialog systems is the lack of high quality, goal-oriented conversational data.

DeepCopy: Grounded Response Generation with Hierarchical Pointer Networks

no code implementations WS 2019 Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, Dilek Hakkani-Tur

Recent advances in neural sequence-to-sequence models have led to promising results for several language generation-based tasks, including dialogue response generation, summarization, and machine translation.

Machine Translation · Response Generation +2

Learning Question-Guided Video Representation for Multi-Turn Video Question Answering

no code implementations WS 2019 Guan-Lin Chao, Abhinav Rastogi, Semih Yavuz, Dilek Hakkani-Tür, Jindong Chen, Ian Lane

Understanding and conversing about dynamic scenes is one of the key capabilities of AI agents that navigate the environment and convey useful information to humans.

Navigate · Question Answering +2

Monotonic Infinite Lookback Attention for Simultaneous Machine Translation

no code implementations ACL 2019 Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, Colin Raffel

Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios.

Machine Translation · NMT +2

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

2 code implementations 21 Feb 2019 Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus towards sequence-to-sequence models.

Sequence-To-Sequence · Speech Recognition

CaLcs: Continuously Approximating Longest Common Subsequence for Sequence Level Optimization

no code implementations EMNLP 2018 Semih Yavuz, Chung-Cheng Chiu, Patrick Nguyen, Yonghui Wu

Maximum-likelihood estimation (MLE) is one of the most widely used approaches for training structured prediction models for text-generation based natural language processing applications.

Abstractive Text Summarization · Image Captioning +5
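
A generic way to make an LCS-style objective differentiable, in the spirit of the title: run the classic LCS dynamic program but replace the hard 0/1 match with a soft match probability. This is an illustrative relaxation, not the paper's exact formulation.

```python
import torch

def soft_lcs(match_probs: torch.Tensor) -> torch.Tensor:
    """Differentiable LCS-style score via the classic DP
        L[i][j] = max(L[i-1][j], L[i][j-1], L[i-1][j-1] + match(i, j))
    with the hard 0/1 match replaced by `match_probs[i, j]`, e.g. the
    probability the model assigns to reference token j at step i."""
    m, n = match_probs.shape
    L = [[torch.zeros(()) for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            L[i][j] = torch.maximum(
                torch.maximum(L[i - 1][j], L[i][j - 1]),
                L[i - 1][j - 1] + match_probs[i - 1, j - 1],
            )
    return L[m][n]

probs = torch.rand(5, 6, requires_grad=True)   # toy per-step match probabilities
score = soft_lcs(probs)
score.backward()                               # gradients flow through max()
print(float(score), probs.grad.shape)
```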

What It Takes to Achieve 100% Condition Accuracy on WikiSQL

no code implementations EMNLP 2018 Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan

The SQL queries in WikiSQL are simple: Each involves one relation and does not have any join operation.

Translation

DialSQL: Dialogue Based Structured Query Generation

no code implementations ACL 2018 Izzeddin Gur, Semih Yavuz, Yu Su, Xifeng Yan

The recent advance in deep learning and semantic parsing has significantly improved the translation accuracy of natural language questions to structured queries.

Semantic Parsing · Translation

Recovering Question Answering Errors via Query Revision

no code implementations EMNLP 2017 Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan

The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes.

Question Answering · Semantic Parsing
