Search Results for author: Kay Rottmann

Found 4 papers, 1 paper with code

Training data reduction for multilingual Spoken Language Understanding systems

no code implementations • ICON 2021 • Anmol Bansal, Anjali Shenoy, Krishna Chaitanya Pappu, Kay Rottmann, Anurag Dwarakanath

Fine-tuning self-supervised pre-trained language models such as BERT has significantly improved state-of-the-art performance on natural language processing tasks.

Intent Classification +1
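
For readers unfamiliar with the setup this abstract alludes to, the sketch below shows one common way to fine-tune a pre-trained BERT encoder for intent classification with the Hugging Face transformers library. The checkpoint name, label set, and toy utterances are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of BERT fine-tuning for intent classification.
    # Checkpoint, labels, and training data are placeholders, not from the paper.
    import torch
    from torch.optim import AdamW
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    checkpoint = "bert-base-multilingual-cased"  # placeholder multilingual encoder
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

    # Toy utterances with intent labels (0 = play_music, 1 = set_alarm, 2 = weather).
    texts = ["play some jazz", "wake me at 7 am", "will it rain tomorrow"]
    labels = torch.tensor([0, 1, 2])

    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    optimizer = AdamW(model.parameters(), lr=2e-5)

    model.train()
    for _ in range(3):  # a few gradient steps on the toy batch
        out = model(**batch, labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

In practice the same loop runs over mini-batches of a full training set; training-data reduction, the paper's topic, would change which examples end up in those batches rather than the fine-tuning mechanics themselves.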

A Survey on Knowledge Editing of Neural Networks

no code implementations • 30 Oct 2023 • Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi

Full model re-training is expensive, unreliable, and incompatible with the current trend of large self-supervised pre-training, making it necessary to find more efficient and effective methods for adapting neural network models to changing data.

knowledge editing Meta-Learning
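
As a rough illustration of the problem the survey addresses, the sketch below edits a single fact by briefly fine-tuning a small causal language model on the new statement while penalizing drift from the original weights (constrained fine-tuning, one of the simplest baselines in this literature). The model, the fact, and the hyperparameters are assumptions for illustration only, not the survey's own method.

    # Minimal sketch of knowledge editing via constrained fine-tuning:
    # update the model on one new fact while keeping parameters close to
    # their original values. Model, text, and hyperparameters are
    # illustrative assumptions.
    import copy
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    original = copy.deepcopy(model).eval()  # frozen reference weights

    new_fact = "The capital of Country X is City Y."  # hypothetical edit target
    batch = tok(new_fact, return_tensors="pt")
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for _ in range(10):
        loss = model(**batch, labels=batch["input_ids"]).loss
        # L2 penalty keeps the edited model close to the original parameters,
        # limiting collateral damage to unrelated knowledge.
        drift = sum(
            (p - p0.detach()).pow(2).sum()
            for p, p0 in zip(model.parameters(), original.parameters())
        )
        (loss + 1e-3 * drift).backward()
        optimizer.step()
        optimizer.zero_grad()

The survey compares this kind of baseline against more targeted approaches, including meta-learning and localized parameter-editing methods, which aim to make the edit more precise and less damaging to unrelated knowledge.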
