1 code implementation • 21 Feb 2023 • Christopher Richardson, Sudipta Kar, Anjishnu Kumar, Anand Ramachandran, Omar Zia Khan, Zeynab Raeesy, Abhinav Sethy
The retrieval system is trained on a dataset that contains ~14K multi-turn information-seeking conversations, each with a valid follow-up question and a set of invalid candidates.
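Given one valid follow-up and several invalid candidates per conversation, such a retriever is commonly trained with a softmax cross-entropy loss over candidate scores. The sketch below assumes a simple dot-product scorer and random vectors purely for illustration; the paper's actual model and loss may differ.

```python
import numpy as np

def retrieval_loss(context_vec, candidate_vecs, valid_idx):
    """Cross-entropy over one valid follow-up and several invalid
    candidates (hypothetical dot-product scoring)."""
    scores = candidate_vecs @ context_vec          # similarity per candidate
    scores -= scores.max()                         # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over candidates
    return -np.log(probs[valid_idx])               # raise the valid candidate's score

rng = np.random.default_rng(0)
ctx = rng.normal(size=8)               # encoded conversation context
cands = rng.normal(size=(5, 8))        # 1 valid + 4 invalid follow-up encodings
loss = retrieval_loss(ctx, cands, valid_idx=0)
```

Minimizing this loss pushes the valid follow-up's score above the invalid candidates'.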
no code implementations • 19 Sep 2017 • Gakuto Kurata, Bhuvana Ramabhadran, George Saon, Abhinav Sethy
Language models (LMs) based on Long Short-Term Memory (LSTM) have shown good gains in many automatic speech recognition tasks.
Automatic Speech Recognition (ASR) +2
no code implementations • 13 Jan 2017 • Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, Brian Kingsbury
The first sub-system is a recurrent neural network (RNN)-based acoustic auto-encoder trained to reconstruct the audio through a finite-dimensional representation.
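The key idea in that first sub-system is that an RNN encoder can compress a variable-length audio feature sequence into a single fixed-size code, from which a decoder reconstructs the frames. The sketch below is an untrained numpy forward pass with hypothetical sizes, showing only the finite-dimensional bottleneck, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, H = 20, 13, 8                # frames, feature dim, bottleneck dim (hypothetical)
audio = rng.normal(size=(T, D))    # MFCC-like feature frames

# Encoder: vanilla RNN; the final hidden state is the finite-dimensional code.
Wx = rng.normal(size=(H, D)) * 0.1
Wh = rng.normal(size=(H, H)) * 0.1
h = np.zeros(H)
for x in audio:
    h = np.tanh(Wx @ x + Wh @ h)
code = h                           # fixed-size representation of the utterance

# Decoder: map the code back to T frames (single linear readout here).
Wo = rng.normal(size=(T * D, H)) * 0.1
recon = (Wo @ code).reshape(T, D)  # reconstruction of the input sequence
```

Training would minimize a reconstruction loss such as `((recon - audio) ** 2).mean()`.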
Automatic Speech Recognition (ASR) +2
no code implementations • 22 Dec 2014 • Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran
We propose Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs).
no code implementations • 28 Dec 2013 • Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan
We present extensions of this decomposition to common regression and classification loss functions, and report a simulation-based analysis of the diversity term and the accuracy of the decomposition.
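The classical squared-error case that such decompositions extend is the ambiguity decomposition: a uniform ensemble's error equals the average member error minus a diversity term. The numpy check below verifies that identity numerically on random data; the paper's extensions to other regression and classification losses go beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5, 100                        # ensemble members, data points
preds = rng.normal(size=(M, N))      # member predictions
y = rng.normal(size=N)               # regression targets

ens = preds.mean(axis=0)                   # uniform ensemble prediction
ens_err = ((ens - y) ** 2).mean()          # ensemble squared error
avg_err = ((preds - y) ** 2).mean()        # average member squared error
diversity = ((preds - ens) ** 2).mean()    # ambiguity / diversity term

# Identity: ensemble error = average member error - diversity
assert np.isclose(ens_err, avg_err - diversity)
```

The diversity term is non-negative, so the ensemble can never be worse than the average member under squared error.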
no code implementations • 30 Oct 2018 • Thomas Powers, Rasool Fakoor, Siamak Shakeri, Abhinav Sethy, Amanjit Kainth, Abdel-rahman Mohamed, Ruhi Sarikaya
Optimal selection of a subset of items from a given set is a hard problem that requires combinatorial optimization.
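To see why exact subset selection is hard, note that it requires scoring exponentially many subsets; a common workaround is a greedy heuristic that repeatedly adds the item with the largest marginal gain. The toy utility below, with its diminishing-returns weighting, is entirely hypothetical and unrelated to the paper's neural approach.

```python
import itertools

# Toy items with base values (hypothetical data).
items = {"a": 4.0, "b": 3.0, "c": 2.5, "d": 1.0}

def value(subset):
    """Subset utility with diminishing returns: each additional item,
    taken in descending value order, counts for half as much."""
    vals = sorted((items[i] for i in subset), reverse=True)
    return sum(v * (0.5 ** k) for k, v in enumerate(vals))

# Exact: enumerate all C(4, 2) subsets -- exponential in general.
best = max(itertools.combinations(items, 2), key=value)

# Greedy: pick the item with the largest marginal gain at each step.
chosen = []
for _ in range(2):
    gains = {i: value(chosen + [i]) - value(chosen)
             for i in items if i not in chosen}
    chosen.append(max(gains, key=gains.get))
```

On this instance the greedy choice matches the exhaustive optimum, though in general greedy selection only guarantees an approximation for suitably structured (e.g. submodular) objectives.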
no code implementations • 31 Aug 2019 • Alexander Hanbo Li, Abhinav Sethy
Neural network models have been very successful at achieving high accuracy on natural language inference (NLI) tasks.
no code implementations • 11 Nov 2019 • Siamak Shakeri, Abhinav Sethy, Cheng Cheng
In this paper, we show that knowledge distillation can encourage a model that generates claim-independent document encodings to mimic the behavior of a more complex model that generates claim-dependent encodings.
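A standard way to realize such distillation is to train the simpler model to match the complex model's temperature-softened output distribution under a KL-divergence loss. The sketch below uses toy logits and the usual Hinton-style objective; the paper's exact setup is not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions
    (hypothetical logits; standard soft-target distillation)."""
    p = softmax(teacher_logits, T)    # soft targets from the complex model
    q = softmax(student_logits, T)    # outputs of the simpler model
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [2.0, 1.0, 0.1]
loss_far = distillation_loss([0.1, 1.0, 2.0], teacher)   # student disagrees
loss_near = distillation_loss([2.0, 1.0, 0.1], teacher)  # student matches
```

The loss vanishes when the student reproduces the teacher's distribution and grows as they diverge.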
no code implementations • 26 Nov 2019 • Alexander Hanbo Li, Abhinav Sethy
In this way, $F$ serves as a feature extractor that maps the input to a high-level representation and adds systematic noise using dropout.
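The noise injection described here can be pictured as standard inverted dropout applied to the extracted features: random units are zeroed and the survivors rescaled so the expected activation is unchanged. The sketch below is generic; $F$ itself stands in for whatever network produced the features.

```python
import numpy as np

def dropout(features, p=0.5, rng=None):
    """Inverted dropout: zero each feature with probability p and
    rescale the rest by 1/(1-p) to preserve the expected value."""
    rng = rng or np.random.default_rng()
    mask = rng.random(features.shape) >= p
    return features * mask / (1.0 - p)

rng = np.random.default_rng(3)
h = rng.normal(size=(4, 16))           # high-level representation from F
h_noisy = dropout(h, p=0.5, rng=rng)   # noisy version fed to the next stage
```

Sampling a fresh mask on every forward pass is what makes the noise stochastic at training time.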
no code implementations • 27 Nov 2019 • Siamak Shakeri, Abhinav Sethy
Generating paraphrases that are lexically similar but semantically different is a challenging task.
no code implementations • 29 Sep 2020 • Kellen Gillespie, Ioannis C. Konstantakopoulos, Xingzhi Guo, Vishal Thanvantri Vasudevan, Abhinav Sethy
User interactions with personal assistants like Alexa, Google Home and Siri are typically initiated by a wake term or wakeword.
no code implementations • 30 Oct 2023 • Chris Richardson, Yao Zhang, Kellen Gillespie, Sudipta Kar, Arshdeep Singh, Zeynab Raeesy, Omar Zia Khan, Abhinav Sethy
To overcome these limitations, we propose a novel summary-augmented approach by extending retrieval-augmented personalization with task-aware user summaries generated by LLMs.
no code implementations • 18 Feb 2024 • Shirley Anugrah Hayati, Taehee Jung, Tristan Bodding-Long, Sudipta Kar, Abhinav Sethy, Joo-Kyung Kim, Dongyeop Kang
Fine-tuning large language models (LLMs) on a collection of large and diverse instructions has improved the model's generalization across tasks, including unseen ones.