no code implementations • ICON 2021 • Anmol Bansal, Anjali Shenoy, Krishna Chaitanya Pappu, Kay Rottmann, Anurag Dwarakanath
Fine-tuning self-supervised pre-trained language models such as BERT has significantly improved state-of-the-art performance on natural language processing tasks.
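As background to this abstract, the fine-tuning setup it refers to typically looks like the following minimal sketch using the Hugging Face transformers library; the SST-2 task, model checkpoint, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (illustrative, not the paper's setup): fine-tune a
# pre-trained BERT model on a text-classification task.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

dataset = load_dataset("glue", "sst2")  # example downstream task
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Convert raw sentences into token IDs the model expects
    return tokenizer(batch["sentence"], truncation=True, padding="max_length")

encoded = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()  # updates all pre-trained weights on the downstream task
```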
no code implementations • 30 Oct 2023 • Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi
Full model re-training is expensive, unreliable, and incompatible with the current trend of large self-supervised pre-training, making it necessary to find more efficient and effective methods for adapting neural network models to changing data.
no code implementations • 13 Dec 2022 • Christopher Hench, Charith Peris, Jack FitzGerald, Kay Rottmann
Despite recent progress in Natural Language Understanding (NLU), the creation of multilingual NLU systems remains a challenge.
5 code implementations • 18 Apr 2022 • Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, Prem Natarajan
We present the MASSIVE dataset: Multilingual Amazon SLU resource package (SLURP) for Slot-filling, Intent classification, and Virtual assistant Evaluation.
Ranked #1 on Slot Filling on MASSIVE
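For readers who want to inspect the dataset, a minimal sketch of loading MASSIVE from the Hugging Face Hub follows; the "AmazonScience/massive" identifier and field names reflect the public release and are assumptions based on that release, not details stated in this abstract.

```python
# Minimal sketch (assumed public release on the Hugging Face Hub):
# load the English split of MASSIVE and inspect one example.
from datasets import load_dataset

massive = load_dataset("AmazonScience/massive", "en-US")
example = massive["train"][0]
print(example["utt"])        # raw utterance text
print(example["annot_utt"])  # utterance with inline slot annotations
print(example["intent"])     # intent label as a class index
```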