no code implementations • 12 Apr 2024 • Ji-Ung Lee, Marc E. Pfetsch, Iryna Gurevych
This work proposes a novel method to generate C-tests, a variant of cloze tests (gap-filling exercises) in which only the last part of a word is turned into a gap.
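The gap scheme described above can be illustrated with a minimal sketch. This is only an assumed implementation of the standard C-test convention (delete the second half of every second word), not the generation method proposed in the paper:

```python
def make_ctest(sentence, start=1, step=2):
    """Turn a sentence into a C-test item: for every `step`-th word
    (beginning at index `start`), replace the second half of the word
    with underscores. A toy sketch of the conventional C-test scheme."""
    words = sentence.split()
    gapped = []
    for i, w in enumerate(words):
        if i >= start and (i - start) % step == 0 and len(w) > 1:
            keep = len(w) // 2 + len(w) % 2  # keep the first half, rounding up
            gapped.append(w[:keep] + "_" * (len(w) - keep))
        else:
            gapped.append(w)
    return " ".join(gapped)
```

For example, `make_ctest("the quick brown fox jumps")` gaps every second word while leaving the first word intact.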
no code implementations • 29 Jun 2023 • Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge
Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters.
1 code implementation • 25 Apr 2023 • Jan-Christoph Klie, Ji-Ung Lee, Kevin Stowe, Gözde Gül Şahin, Nafise Sadat Moosavi, Luke Bates, Dominic Petrak, Richard Eckart de Castilho, Iryna Gurevych
Citizen Science is an alternative to crowdsourcing that is relatively unexplored in the context of NLP.
3 code implementations • 13 Mar 2023 • Ulf A. Hamster, Ji-Ung Lee, Alexander Geyken, Iryna Gurevych
Training and inference on edge devices often require an efficient setup due to computational limitations.
no code implementations • 31 Aug 2022 • Marcos Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Colin Raffel, Pedro H. Martins, André F. T. Martins, Jessica Zosa Forde, Peter Milder, Edwin Simpson, Noam Slonim, Jesse Dodge, Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
Recent work in natural language processing (NLP) has yielded appealing results from scaling model parameters and training data; however, using only scale to improve performance means that resource consumption also grows.
2 code implementations • 30 Aug 2022 • Haishuo Fang, Ji-Ung Lee, Nafise Sadat Moosavi, Iryna Gurevych
In contrast to conventional, predefined activation functions, rational activation functions (RAFs) can adaptively learn an optimal activation shape from the input data during training.
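The core idea behind a rational activation can be sketched as a ratio of two polynomials with learnable coefficients; the denominator is kept positive so the function is defined everywhere. This is a generic illustration of the concept, assuming a simple parameterization — the paper's exact formulation may differ:

```python
def rational_activation(x, p_coeffs, q_coeffs):
    """Evaluate f(x) = P(x) / (1 + |Q(x)|), where P and Q are
    polynomials whose coefficients would be learned during training.
    A hedged sketch of the rational activation function (RAF) idea."""
    num = sum(a * x**i for i, a in enumerate(p_coeffs))
    den = 1.0 + abs(sum(b * x**(i + 1) for i, b in enumerate(q_coeffs)))
    return num / den
```

With `p_coeffs=[0.0, 1.0]` and an empty denominator, the function reduces to the identity, showing how the learned shape can subsume familiar activations.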
1 code implementation • 16 Aug 2022 • Lorenz Stangier, Ji-Ung Lee, Yuxi Wang, Marvin Müller, Nicholas Frick, Joachim Metternich, Iryna Gurevych
We evaluate TexPrax in a user study with German factory employees who ask their colleagues for solutions to problems that arise during their daily work.
1 code implementation • CL (ACL) 2022 • Ji-Ung Lee, Jan-Christoph Klie, Iryna Gurevych
Annotation studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain.
1 code implementation • ACL 2021 • Tilman Beck, Ji-Ung Lee, Christina Viehmann, Marcus Maurer, Oliver Quiring, Iryna Gurevych
This work investigates the use of interactively updated label suggestions to improve upon the efficiency of gathering annotations on the task of opinion mining in German Covid-19 social media data.
1 code implementation • ACL 2020 • Ji-Ung Lee, Christian M. Meyer, Iryna Gurevych
Existing approaches to active learning maximize system performance by sampling for annotation the unlabeled instances that yield the most efficient training.
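The conventional setup contrasted above can be sketched with a standard acquisition function such as entropy-based uncertainty sampling. This is a generic illustration of existing active learning practice, not the method proposed in the paper:

```python
import math

def uncertainty_sample(probs, k):
    """Return the indices of the k unlabeled instances whose predicted
    class distributions have the highest entropy — a common active
    learning acquisition strategy, shown here only to illustrate the
    system-centric sampling the paper argues goes beyond."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    ranked = sorted(range(len(probs)), key=lambda i: entropy(probs[i]),
                    reverse=True)
    return ranked[:k]
```

For instance, given predictions `[[0.5, 0.5], [0.9, 0.1], [1.0, 0.0]]`, the most uncertain instance (index 0) is selected first.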
1 code implementation • ACL 2019 • Ji-Ung Lee, Erik Schwan, Christian M. Meyer
We propose two novel manipulation strategies for increasing and decreasing the difficulty of C-tests automatically.
1 code implementation • NAACL 2019 • Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych
Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios.
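A toy version of such a visual perturbation is a character-level substitution with visually similar symbols. The substitution table below is an illustrative assumption; the paper studies embedding-based visual attacks, which this sketch does not reproduce:

```python
# Hypothetical "leet speak" table for illustration only.
LEET_MAP = {"e": "3", "l": "1", "t": "7", "o": "0", "i": "!", "a": "@"}

def visually_perturb(text, table=LEET_MAP):
    """Replace each character with a visually similar symbol if one is
    in the table; characters without a mapping are kept as-is."""
    return "".join(table.get(c, c) for c in text.lower())
```

For example, `visually_perturb("leet")` yields `"1337"`, matching the writing style cited above.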