Search Results for author: Ruhi Sarikaya

Found 29 papers, 0 papers with code

Learning Slice-Aware Representations with Mixture of Attentions

no code implementations Findings (ACL) 2021 Cheng Wang, Sungjin Lee, Sunghyun Park, Han Li, Young-Bum Kim, Ruhi Sarikaya

Real-world machine learning systems are achieving remarkable performance in terms of coarse-grained metrics like overall accuracy and F1 score.

Natural Language Understanding

Handling Long-Tail Queries with Slice-Aware Conversational Systems

no code implementations 26 Apr 2021 Cheng Wang, Sun Kim, Taiwoo Park, Sajal Choudhary, Sunghyun Park, Young-Bum Kim, Ruhi Sarikaya, Sungjin Lee

Conversational AI systems such as Siri and Alexa have proven their usefulness, directly impacting our daily lives.

Neural model robustness for skill routing in large-scale conversational AI systems: A design choice exploration

no code implementations 4 Mar 2021 Han Li, Sunghyun Park, Aswarth Dara, Jinseok Nam, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, Ruhi Sarikaya

Ensuring model robustness or resilience in the skill routing component is an important problem since skills may dynamically change their subscription in the ontology after the skill routing model has been deployed to production.

Automatic Speech Recognition (ASR) +3

A scalable framework for learning from implicit user feedback to improve natural language understanding in large-scale conversational AI systems

no code implementations EMNLP 2021 Sunghyun Park, Han Li, Ameen Patel, Sidharth Mudgal, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, Ruhi Sarikaya

Natural Language Understanding (NLU) is an established component within a conversational AI or digital assistant system, and it is responsible for producing semantic understanding of a user request.

Natural Language Understanding
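
As a concrete illustration of the "semantic understanding" such an NLU component produces, here is a toy domain/intent/slots frame for one user request; the field names and values are illustrative, not taken from the paper.

```python
# A toy example of an NLU semantic frame: the domain, intent, and slots
# extracted from a single user request. All names/values are illustrative.
utterance = "play thunderstruck by ac/dc"
frame = {
    "domain": "Music",
    "intent": "PlayMusic",
    "slots": {"song": "thunderstruck", "artist": "ac/dc"},
}
print(frame["intent"], frame["slots"])
```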

Feedback-Based Self-Learning in Large-Scale Conversational AI Agents

no code implementations 6 Nov 2019 Pragaash Ponnusamy, Alireza Roshan Ghias, Chenlei Guo, Ruhi Sarikaya

Typically, the accuracy of the ML models in these components is improved by manually transcribing and annotating data.

Collaborative Filtering Self-Learning

Locale-agnostic Universal Domain Classification Model in Spoken Language Understanding

no code implementations NAACL 2019 Jihwan Lee, Ruhi Sarikaya, Young-Bum Kim

In this paper, we introduce an approach for leveraging available data across multiple locales sharing the same language to 1) improve domain classification model accuracy in Spoken Language Understanding and user experience even if new locales do not have sufficient data and 2) reduce the cost of scaling the domain classifier to a large number of locales.

Classification domain classification +3

Coupled Representation Learning for Domains, Intents and Slots in Spoken Language Understanding

no code implementations 13 Dec 2018 Jihwan Lee, Dongchan Kim, Ruhi Sarikaya, Young-Bum Kim

Our proposed model learns the vector representation of intents based on the slots tied to these intents by aggregating the representations of the slots.

Representation Learning Spoken Language Understanding
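
A minimal sketch of the aggregation idea described in the snippet above, assuming mean-pooling over the embeddings of the slots tied to each intent; the class name and slot mapping are illustrative, and the paper's actual model may aggregate differently.

```python
# Sketch: an intent's vector is built by pooling the embeddings of the
# slots tied to that intent. Names (SlotAggregatedIntents, intent_slot_map)
# are illustrative, not from the paper.
import torch
import torch.nn as nn

class SlotAggregatedIntents(nn.Module):
    def __init__(self, num_slots: int, dim: int, intent_slot_map: dict):
        super().__init__()
        self.slot_emb = nn.Embedding(num_slots, dim)
        self.intent_slot_map = intent_slot_map  # intent id -> list of slot ids

    def intent_vector(self, intent_id: int) -> torch.Tensor:
        slot_ids = torch.tensor(self.intent_slot_map[intent_id])
        # Aggregate (here: mean-pool) the embeddings of the tied slots.
        return self.slot_emb(slot_ids).mean(dim=0)

# Example: intent 0 ("PlayMusic") tied to slots {artist: 0, song: 1}.
model = SlotAggregatedIntents(num_slots=4, dim=16,
                              intent_slot_map={0: [0, 1], 1: [2, 3]})
print(model.intent_vector(0).shape)  # torch.Size([16])
```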

Differentiable Greedy Networks

no code implementations 30 Oct 2018 Thomas Powers, Rasool Fakoor, Siamak Shakeri, Abhinav Sethy, Amanjit Kainth, Abdel-rahman Mohamed, Ruhi Sarikaya

Optimal selection of a subset of items from a given set is a hard problem that requires combinatorial optimization.

Claim Verification Combinatorial Optimization +1
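
One common way to make such greedy subset selection differentiable is to relax the hard argmax at each greedy step into a temperature-controlled softmax. The sketch below shows that generic relaxation, not the paper's exact network.

```python
# Generic differentiable relaxation of greedy selection: replace the hard
# argmax at each step with a softmax, and softly mask items already picked.
import torch

def soft_greedy_select(scores: torch.Tensor, k: int, tau: float = 0.5):
    """Return k soft selection distributions over the n items in `scores`."""
    mask = torch.zeros_like(scores)  # accumulates soft "already picked" mass
    picks = []
    for _ in range(k):
        # Penalize items in proportion to how much they were already selected.
        logits = scores - 1e3 * mask
        p = torch.softmax(logits / tau, dim=-1)  # soft, differentiable argmax
        picks.append(p)
        mask = mask + p
    return torch.stack(picks)  # shape (k, n)

probs = soft_greedy_select(torch.randn(10, requires_grad=True), k=3)
print(probs.sum(dim=-1))  # each of the 3 rows sums to 1
```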

Efficient Large-Scale Neural Domain Classification with Personalized Attention

no code implementations ACL 2018 Young-Bum Kim, Dongchan Kim, Anjishnu Kumar, Ruhi Sarikaya

In this paper, we explore the task of mapping spoken language utterances to one of thousands of natural language understanding domains in intelligent personal digital assistants (IPDAs).

Classification domain classification +3

Contextual Slot Carryover for Disparate Schemas

no code implementations 5 Jun 2018 Chetan Naik, Arpit Gupta, Hancheng Ge, Lambert Mathias, Ruhi Sarikaya

In the slot-filling paradigm, where a user can refer back to slots in the context during a conversation, the goal of the contextual understanding system is to resolve the referring expressions to the appropriate slots in the context.

Slot Filling
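
A toy rendering of the carryover setup described above: every slot from earlier turns is a candidate, and a scoring function decides which ones to carry into the current turn. The scorer here is a stand-in for the learned model, and all names are illustrative.

```python
# Toy slot carryover: score each context slot against the current turn and
# keep those above a threshold. The scorer is a stand-in for a learned model.
from dataclasses import dataclass

@dataclass
class Slot:
    key: str
    value: str
    turn: int  # dialogue turn in which the slot was filled

def carry_over(context_slots, current_turn, score, threshold=0.5):
    """Keep the context slots the scorer judges relevant to the current turn."""
    return [s for s in context_slots if score(s, current_turn) > threshold]

# Example: "book it for 7pm" refers back to the restaurant from turn 1.
context = [Slot("restaurant", "Nopa", 1), Slot("city", "Seattle", 0)]
score = lambda s, turn: 0.9 if s.key == "restaurant" else 0.1  # stand-in
print(carry_over(context, "book it for 7pm", score))
```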

A Scalable Neural Shortlisting-Reranking Approach for Large-Scale Domain Classification in Natural Language Understanding

no code implementations NAACL 2018 Young-Bum Kim, Dongchan Kim, Joo-Kyung Kim, Ruhi Sarikaya

Intelligent personal digital assistants (IPDAs), popular real-life applications with spoken language understanding capabilities, can cover potentially thousands of overlapping domains for natural language understanding, and the task of finding the best domain to handle an utterance becomes a challenging problem at large scale.

domain classification General Classification +2
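
The shortlisting-reranking pattern named in the title can be sketched generically: a cheap scorer prunes thousands of domains down to k candidates, and a more expensive scorer picks the winner. Both scorers below are stand-ins, not the paper's models.

```python
# Generic two-stage routing: shortlist with a cheap scorer, then rerank the
# k survivors with a more expensive one. Scorers here are stand-ins.
import heapq

def route(utterance, domains, shortlist_score, rerank_score, k=5):
    shortlist = heapq.nlargest(
        k, domains, key=lambda d: shortlist_score(utterance, d))
    return max(shortlist, key=lambda d: rerank_score(utterance, d))

domains = [f"domain_{i}" for i in range(1000)]
cheap = lambda u, d: hash((u, d)) % 100    # stand-in shortlister
precise = lambda u, d: hash((d, u)) % 100  # stand-in reranker
print(route("play some jazz", domains, cheap, precise))
```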

Efficient Large-Scale Domain Classification with Personalized Attention

no code implementations 22 Apr 2018 Young-Bum Kim, Dongchan Kim, Anjishnu Kumar, Ruhi Sarikaya

In this paper, we explore the task of mapping spoken language utterances to one of thousands of natural language understanding domains in intelligent personal digital assistants (IPDAs).

Classification domain classification +2

Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging

no code implementations 29 Nov 2017 Young-Bum Kim, Sungjin Lee, Ruhi Sarikaya

In multi-turn dialogs, natural language understanding models can make obvious errors when they are blind to contextual information.

Natural Language Understanding

Cross-Lingual Transfer Learning for POS Tagging without Cross-Lingual Resources

no code implementations EMNLP 2017 Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, Eric Fosler-Lussier

Evaluating on POS datasets from 14 languages in the Universal Dependencies corpus, we show that the proposed transfer learning model improves the POS tagging performance of the target languages without exploiting any linguistic knowledge between the source language and the target language.

Cross-Lingual Transfer Language Modelling +7

Domainless Adaptation by Constrained Decoding on a Schema Lattice

no code implementations COLING 2016 Young-Bum Kim, Karl Stratos, Ruhi Sarikaya

In many applications such as personal digital assistants, there is a constant need for new domains to increase the system's coverage of user queries.

Multi-Label Classification Spoken Language Understanding

Frustratingly Easy Neural Domain Adaptation

no code implementations COLING 2016 Young-Bum Kim, Karl Stratos, Ruhi Sarikaya

Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense real-valued features such as those used in neural networks.

Domain Adaptation
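
For reference, the Daumé III (2009) feature augmentation mentioned above triples each feature vector into shared, source-only, and target-only copies; the paper asks how to achieve the same effect for dense neural features. A minimal numpy sketch of the original trick:

```python
# Daumé III (2009) "frustratingly easy" feature augmentation:
# source x -> [x, x, 0], target x -> [x, 0, x]
# (shared copy, source-specific copy, target-specific copy).
import numpy as np

def augment(x: np.ndarray, domain: str) -> np.ndarray:
    zero = np.zeros_like(x)
    if domain == "source":
        return np.concatenate([x, x, zero])  # [shared, source, target]
    return np.concatenate([x, zero, x])

x = np.array([1.0, 2.0])
print(augment(x, "source"))  # [1. 2. 1. 2. 0. 0.]
print(augment(x, "target"))  # [1. 2. 0. 0. 1. 2.]
```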
