no code implementations • 21 Mar 2024 • Lukas Galke, Limor Raviv
Based on a short literature review, we identify key pressures that have recovered initially absent human-like patterns in emergent communication models: communicative success, efficiency, learnability, and other psycho-/sociolinguistic factors.
no code implementations • 16 Nov 2023 • Andor Diera, Abdelhalim Dahou, Lukas Galke, Fabian Karl, Florian Sihler, Ansgar Scherp
Language models can serve as a valuable tool for software developers to increase productivity.
1 code implementation • 19 Oct 2023 • Marcel Hoffmann, Lukas Galke, Ansgar Scherp
We study the problem of lifelong graph learning in an open-world scenario, where a model needs to deal with new tasks and potentially unknown classes.
1 code implementation • 23 Feb 2023 • Lukas Galke, Yoav Ram, Limor Raviv
Deep neural networks drive the success of natural language processing.
no code implementations • 22 Apr 2022 • Lukas Galke, Yoav Ram, Limor Raviv
Emergent communication protocols among humans and artificial neural network agents do not yet share the same properties and show some critical mismatches in results.
no code implementations • 8 Apr 2022 • Lukas Galke, Andor Diera, Bao Xin Lin, Bhakti Khera, Tim Meuser, Tushar Singhal, Fabian Karl, Ansgar Scherp
This study reviews and compares methods for single-label and multi-label text classification, categorized into bag-of-words, sequence-based, graph-based, and hierarchical methods.
1 code implementation • 20 Dec 2021 • Lukas Galke, Iacopo Vagliano, Benedikt Franke, Tobias Zielke, Marcel Hoffmann, Ansgar Scherp
The combination of these two challenges is particularly relevant since newly emerging classes typically resemble only a tiny fraction of the data, adding to the already skewed class distribution.
1 code implementation • 17 Sep 2021 • Lukas Galke, Isabelle Cuber, Christoph Meyer, Henrik Ferdinand Nölscher, Angelina Sonderecker, Ansgar Scherp
We match or exceed the scores of ELMo for all tasks of the GLUE benchmark except for the sentiment analysis task SST-2 and the linguistic acceptability task CoLA.
2 code implementations • ACL 2022 • Lukas Galke, Ansgar Scherp
We show that a wide multi-layer perceptron (MLP) using a Bag-of-Words (BoW) outperforms the recent graph-based models TextGCN and HeteGCN in an inductive text classification setting and is comparable with HyperGAT.
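The setup described here, a Bag-of-Words representation fed into a wide multi-layer perceptron, can be sketched as follows. The toy dataset, vectorizer settings, and hidden-layer width are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: Bag-of-Words features + a wide MLP for text classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

train_texts = ["cheap meds online", "meeting at noon",
               "win a prize now", "lunch tomorrow?"]
train_labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()            # unweighted Bag-of-Words counts
X = vectorizer.fit_transform(train_texts)

# "Wide" here means a single large hidden layer rather than many deep ones.
clf = MLPClassifier(hidden_layer_sizes=(1024,), max_iter=500, random_state=0)
clf.fit(X, train_labels)

pred = clf.predict(vectorizer.transform(["free prize meds"]))[0]
```

Because the BoW features discard word order, this baseline is much cheaper than graph-based models that must first build a word/document graph.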
1 code implementation • 10 May 2021 • Iacopo Vagliano, Lukas Galke, Ansgar Scherp
In conclusion, it is crucial to consider the semantics of item co-occurrence when choosing a recommendation model, and to decide carefully which metadata to exploit.
1 code implementation • 25 Jun 2020 • Lukas Galke, Benedikt Franke, Tobias Zielke, Ansgar Scherp
Graph neural networks (GNNs) have emerged as the standard method for numerous tasks on graph-structured data such as node classification.
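The building block behind such node-classification pipelines is a graph convolution layer. A minimal NumPy sketch of one GCN-style layer, with an illustrative 3-node graph and random (untrained) weights as assumptions:

```python
# Sketch of one GCN layer: H = ReLU(A_norm @ X @ W).
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)    # adjacency of a 3-node path graph
A_hat = A + np.eye(3)                     # add self-loops
deg_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = deg_inv_sqrt @ A_hat @ deg_inv_sqrt  # symmetric normalization

X = np.eye(3)                             # one-hot node features
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))           # learnable weights (random here)

H = np.maximum(A_norm @ X @ W, 0.0)       # each node aggregates its neighbors
```

Stacking such layers and adding a softmax over the final node representations yields a standard node classifier.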
1 code implementation • 22 Jul 2019 • Lukas Galke, Florian Mai, Iacopo Vagliano, Ansgar Scherp
We present multi-modal adversarial autoencoders for recommendation and evaluate them on two different tasks: citation recommendation and subject label recommendation.
1 code implementation • 15 May 2019 • Lukas Galke, Iacopo Vagliano, Ansgar Scherp
In this setup, we compare adapting pretrained graph neural networks against retraining from scratch.
1 code implementation • ICLR 2019 • Florian Mai, Lukas Galke, Ansgar Scherp
In order to address this shortcoming, we propose a learning algorithm for the Compositional Matrix-Space Model, which we call Continual Multiplication of Words (CMOW).
1 code implementation • 15 May 2017 • Lukas Galke, Florian Mai, Alan Schelten, Dennis Brunsch, Ansgar Scherp
For the first time, we offer a systematic comparison of classification approaches to investigate how well semantic annotation can be performed using only document metadata, such as titles, published as labels on the Linked Open Data cloud.