Search Results for author: Akshat Shrivastava

Found 19 papers, 4 papers with code

Small But Funny: A Feedback-Driven Approach to Humor Distillation

no code implementations28 Feb 2024 Sahithya Ravi, Patrick Huber, Akshat Shrivastava, Aditya Sagar, Ahmed Aly, Vered Shwartz, Arash Einolghozati

The emergence of Large Language Models (LLMs) has brought to light promising language generation capabilities, particularly in performing tasks like complex reasoning and creative writing.

Text Generation

Augmenting text for spoken language understanding with Large Language Models

no code implementations17 Sep 2023 Roshan Sharma, Suyoun Kim, Daniel Lazar, Trang Le, Akshat Shrivastava, Kwanghoon Ahn, Piyush Kansal, Leda Sari, Ozlem Kalinli, Michael Seltzer

Using the generated text with JAT and TTS for spoken semantic parsing improves EM on STOP by 1.4% and 2.6% absolute for existing and new domains respectively.

Semantic Parsing Spoken Language Understanding
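EM here is exact-match accuracy on linearized parses. A minimal sketch of how such a metric is typically computed (the whitespace normalization is an assumption; the benchmark's exact convention may differ):

```python
def exact_match(preds, golds):
    """Exact-match (EM) accuracy: fraction of predicted parses that are
    string-identical to the gold parse after whitespace normalization
    (normalization choice is an assumption, not the paper's spec)."""
    norm = lambda s: " ".join(s.split())
    assert len(preds) == len(golds)
    return sum(norm(p) == norm(g) for p, g in zip(preds, golds)) / len(golds)
```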

Modality Confidence Aware Training for Robust End-to-End Spoken Language Understanding

no code implementations22 Jul 2023 Suyoun Kim, Akshat Shrivastava, Duc Le, Ju Lin, Ozlem Kalinli, Michael L. Seltzer

End-to-end (E2E) spoken language understanding (SLU) systems that generate a semantic parse from speech have become more promising recently.

Speech Recognition +1

TreePiece: Faster Semantic Parsing via Tree Tokenization

no code implementations30 Mar 2023 Sid Wang, Akshat Shrivastava, Sasha Livshits

Autoregressive (AR) encoder-decoder neural networks have proved successful in many NLP problems, including Semantic Parsing -- a task that translates natural language to machine-readable parse trees.

Semantic Parsing

Privately Customizing Prefinetuning to Better Match User Data in Federated Learning

no code implementations17 Feb 2023 Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar

To this end, we propose FreD (Federated Private Fréchet Distance) -- a privately computed distance between a prefinetuning dataset and federated datasets.

Federated Learning Language Modelling +2
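FreD builds on the Fréchet distance between Gaussian fits of two embedding sets. A non-private sketch of that underlying quantity (the paper's contribution is computing it under differential privacy, which is omitted here):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(x, y):
    """Squared Frechet distance between Gaussian fits of two embedding
    sets x, y of shape (n, d). Non-private sketch only: FreD computes a
    distance of this kind under differential privacy."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):        # drop tiny imaginary residue
        covmean = covmean.real
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean))
```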

Data-Efficiency with a Single GPU: An Exploration of Transfer Methods for Small Language Models

no code implementations8 Oct 2022 Alon Albalak, Akshat Shrivastava, Chinnadhurai Sankar, Adithya Sagar, Mike Ross

Multi-task learning (MTL), instruction tuning, and prompting have recently been shown to improve the generalizability of large language models to new tasks.

Multi-Task Learning

STOP: A dataset for Spoken Task Oriented Semantic Parsing

1 code implementation29 Jun 2022 Paden Tomasello, Akshat Shrivastava, Daniel Lazar, Po-chun Hsu, Duc Le, Adithya Sagar, Ali Elkahky, Jade Copet, Wei-Ning Hsu, Yossi Adi, Robin Algayres, Tu Anh Nguyen, Emmanuel Dupoux, Luke Zettlemoyer, Abdelrahman Mohamed

Furthermore, in addition to the human-recorded audio, we are releasing a TTS-generated version to benchmark the performance for low-resource domain adaptation of end-to-end SLU systems.

Automatic Speech Recognition (ASR) +4

Deliberation Model for On-Device Spoken Language Understanding

no code implementations4 Apr 2022 Duc Le, Akshat Shrivastava, Paden Tomasello, Suyoun Kim, Aleksandr Livshits, Ozlem Kalinli, Michael L. Seltzer

We propose a novel deliberation-based approach to end-to-end (E2E) spoken language understanding (SLU), where a streaming automatic speech recognition (ASR) model produces the first-pass hypothesis and a second-pass natural language understanding (NLU) component generates the semantic parse by conditioning on both ASR's text and audio embeddings.

Automatic Speech Recognition (ASR) +3

Retrieve-and-Fill for Scenario-based Task-Oriented Semantic Parsing

no code implementations2 Feb 2022 Akshat Shrivastava, Shrey Desai, Anchit Gupta, Ali Elkahky, Aleksandr Livshits, Alexander Zotov, Ahmed Aly

We tackle this problem by introducing scenario-based semantic parsing: a variant of the original task which first requires disambiguating an utterance's "scenario" (an intent-slot template with variable leaf spans) before generating its frame, complete with ontology and utterance tokens.

Retrieval Semantic Parsing
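A toy illustration of the two-stage retrieve-and-fill idea, with bag-of-words overlap standing in for the paper's learned retriever and a single-slot template for the filler (all names, triggers, and the `<LEAF>` placeholder format are hypothetical):

```python
def retrieve(utterance, scenarios):
    """Stage 1: disambiguate the utterance's scenario by picking the one
    whose trigger words overlap it most (a stand-in for learned retrieval)."""
    words = set(utterance.lower().split())
    return max(scenarios, key=lambda s: len(words & s["triggers"]))

def fill(utterance, scenario):
    """Stage 2: complete the intent-slot template by filling its variable
    leaf span with the utterance's non-trigger words (single-slot toy case)."""
    leaf = " ".join(w for w in utterance.split()
                    if w.lower() not in scenario["triggers"])
    return scenario["template"].replace("<LEAF>", leaf)
```

For example, `fill(u, retrieve(u, scenarios))` maps an utterance to a complete frame with ontology and utterance tokens, mirroring the two-step decomposition described above.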

RETRONLU: Retrieval Augmented Task-Oriented Semantic Parsing

no code implementations NLP4ConvAI (ACL) 2022 Vivek Gupta, Akshat Shrivastava, Adithya Sagar, Armen Aghajanyan, Denis Savenkov

While large pre-trained language models accumulate a lot of knowledge in their parameters, it has been demonstrated that augmenting them with non-parametric retrieval-based memory has a number of benefits, from accuracy improvements to data efficiency, for knowledge-focused tasks such as question answering.

Question Answering Retrieval +1

Assessing Data Efficiency in Task-Oriented Semantic Parsing

no code implementations10 Jul 2021 Shrey Desai, Akshat Shrivastava, Justin Rill, Brian Moran, Safiyyah Saleem, Alexander Zotov, Ahmed Aly

Data efficiency, despite being an attractive characteristic, is often challenging to measure and optimize for in task-oriented semantic parsing; unlike exact match, it can require both model- and domain-specific setups, which have, historically, varied widely across experiments.

Semantic Parsing

Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization

no code implementations ICML Workshop AutoML 2021 David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed Aly, Ganesh Venkatesh, Maximilian Balandat

When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy.

Bayesian Optimization Natural Language Understanding +1
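The latency-accuracy trade-offs referred to above form a Pareto front over candidate architectures. A small sketch of generic non-dominated filtering (this is only the textbook definition, not the paper's multi-objective Bayesian optimization machinery):

```python
def pareto_front(configs):
    """Return the (latency, accuracy) points not dominated by any other.
    A point dominates another if it is no worse on both objectives
    (latency lower is better, accuracy higher is better) and strictly
    better on at least one."""
    front = []
    for lat, acc in configs:
        dominated = any(
            l2 <= lat and a2 >= acc and (l2 < lat or a2 > acc)
            for l2, a2 in configs
        )
        if not dominated:
            front.append((lat, acc))
    return front
```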

Low-Resource Task-Oriented Semantic Parsing via Intrinsic Modeling

no code implementations15 Apr 2021 Shrey Desai, Akshat Shrivastava, Alexander Zotov, Ahmed Aly

Task-oriented semantic parsing models typically have high resource requirements: to support new ontologies (i.e., intents and slots), practitioners crowdsource thousands of samples for supervised fine-tuning.

Semantic Parsing

Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing

no code implementations Findings (EMNLP) 2021 Akshat Shrivastava, Pierce Chuang, Arun Babu, Shrey Desai, Abhinav Arora, Alexander Zotov, Ahmed Aly

An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance $x$, predicting a frame's length $|y|$, and decoding a $|y|$-sized frame with utterance and ontology tokens.

Cross-Lingual Transfer Quantization +2
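The span-pointer idea replaces leaf text generation with pointers into the utterance. A toy converter from a linearized frame with text leaves to span-pointer form (the bracket conventions and the assumption that each leaf is one contiguous utterance span are simplifications of the paper's setup):

```python
def to_span_form(utt_tokens, frame_tokens):
    """Rewrite leaf text in a linearized frame as (start, end) pointers
    into the utterance. Toy version: leaves are contiguous runs of
    utterance tokens; ontology tokens look like '[IN:..', '[SL:..', ']'."""
    out, i = [], 0
    while i < len(frame_tokens):
        tok = frame_tokens[i]
        if tok.startswith("[") or tok == "]":
            out.append(tok)                 # keep ontology tokens as-is
            i += 1
            continue
        j = i                               # collect the leaf's words
        while j < len(frame_tokens) and not (
                frame_tokens[j].startswith("[") or frame_tokens[j] == "]"):
            j += 1
        span = frame_tokens[i:j]
        for s in range(len(utt_tokens) - len(span) + 1):
            if utt_tokens[s:s + len(span)] == span:
                out.append(f"({s},{s + len(span)})")  # pointer, not text
                break
        i = j
    return out
```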

Non-Autoregressive Semantic Parsing for Compositional Task-Oriented Dialog

1 code implementation NAACL 2021 Arun Babu, Akshat Shrivastava, Armen Aghajanyan, Ahmed Aly, Angela Fan, Marjan Ghazvininejad

Semantic parsing using sequence-to-sequence models allows parsing of deeper representations compared to traditional word-tagging-based models.

Semantic Parsing

Conversational Semantic Parsing

no code implementations EMNLP 2020 Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta

In this paper, we propose a semantic representation for such task-oriented conversational systems that can represent concepts such as co-reference and context carryover, enabling comprehensive understanding of queries in a session.

dialog state tracking Semantic Parsing

Better Fine-Tuning by Reducing Representational Collapse

3 code implementations ICLR 2021 Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta

Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods.

Abstractive Text Summarization Cross-Lingual Natural Language Inference
