no code implementations • NAACL (WOAH) 2022 • Niclas Hertzberg, Robin Cooper, Elina Lindgren, Björn Rönnerstrand, Gregor Rettenegger, Ellen Breitholtz, Asad Sayeed
“Dogwhistles” are expressions intended by the speaker to have two messages: a socially unacceptable “in-group” message understood by a subset of listeners, and a benign message intended for the out-group.
1 code implementation • 4 Dec 2024 • Xudong Hong, Sharid Loáiciga, Asad Sayeed
Active Curriculum Language Modeling (ACLM; Hong et al., 2023) is a learner-directed approach to training a language model.
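The core idea of a learner-directed curriculum can be sketched as a loop in which the model itself ranks the remaining training pool and picks what to see next. The scoring criterion below is a hypothetical stand-in, not the actual ACLM selection function from Hong et al. (2023):

```python
def aclm_step(score, pool, batch_size=2):
    # Learner-directed selection: the model's own scoring function
    # (e.g., its surprisal on each candidate) ranks the pool, and the
    # lowest-scoring examples are chosen as the next training batch.
    ranked = sorted(pool, key=score)
    return ranked[:batch_size], ranked[batch_size:]

# Toy "scorer": example length stands in for a real model's surprisal.
pool = ["a b", "a b c d e", "a", "a b c"]
batch, remaining = aclm_step(len, pool)
print(batch)  # → ['a', 'a b']
```

A real implementation would replace `len` with the language model's negative log-likelihood and re-rank the pool after each update, since the model's preferences shift as it learns.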
no code implementations • 20 Jan 2023 • Xudong Hong, Asad Sayeed, Khushboo Mehra, Vera Demberg, Bernt Schiele
The image sequences are aligned with a total of 12K stories, which were collected via crowdsourcing given the image sequences and a set of grounded characters from each corresponding sequence.
no code implementations • 13 Oct 2022 • Aashish Arora, Harshitha Malireddi, Daniel Bauer, Asad Sayeed, Yuval Marton
Unlike previous work, our model does not require pre-training or fine-tuning on additional tasks, beyond using off-the-shelf (static or contextual) embeddings and supervision.
no code implementations • 9 Aug 2022 • Mughilan Muthupari, Samrat Halder, Asad Sayeed, Yuval Marton
Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pretrained embeddings, we explore what settings allow for this and examine where most of the learning is encoded: the word embeddings, the semantic role embeddings, or ``the network''.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Bill Noble, Asad Sayeed, Raquel Fernández, Staffan Larsson
Just as the meaning of words is tied to the communities in which they are used, so too is semantic change.
1 code implementation • LREC 2022 • Yuval Marton, Asad Sayeed
We compare the old and new corpus versions’ impact on a verb–argument fit modeling task, using a high-performing neural approach.
no code implementations • CoNLL 2020 • Xudong Hong, Rakshith Shetty, Asad Sayeed, Khushboo Mehra, Vera Demberg, Bernt Schiele
A problem in automatically generated stories for image sequences is that they use overly generic vocabulary and phrase structure and fail to match the distributional characteristics of human-generated text.
Ranked #5 on Visual Storytelling on VIST
no code implementations • LREC 2020 • Ida Rørmann Olsen, Bolette Pedersen, Asad Sayeed
Our aim is to identify suitable sense representations for NLP in Danish.
no code implementations • LREC 2020 • Vidya Somashekarappa, Christine Howes, Asad Sayeed
This paper introduces an approach for annotating eye gaze, considering both its social and referential functions in multi-modal human-human dialogue.
no code implementations • LREC 2020 • Sharid Loáiciga, Christian Hardmeier, Asad Sayeed
Non-nominal coreference is much less studied than nominal coreference, partly because of the lack of annotated corpora.
no code implementations • WS 2019 • Fangzhou Zhai, Vera Demberg, Pavel Shkadzko, Wei Shi, Asad Sayeed
The model exploits a symbolic text planning module to produce text plans, thus reducing the demand for data; a neural surface realization module then generates fluent text conditioned on the text plan.
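The plan-then-realize architecture can be sketched as a two-stage pipeline: a symbolic planner emits an ordered text plan, and a realizer conditions on it to produce the sentence. The real system uses a neural surface realizer; here a simple template stands in for it, and the plan representation is hypothetical:

```python
def plan(event):
    # Symbolic planning: map an event record to an ordered list of
    # (role, filler) tuples forming the text plan.
    return [("agent", event["agent"]),
            ("action", event["action"]),
            ("patient", event["patient"])]

def realize(text_plan):
    # Surface realization: a template substitutes for the neural
    # realizer that would normally condition on the plan.
    slots = dict(text_plan)
    return f"{slots['agent']} {slots['action']} the {slots['patient']}."

event = {"agent": "The knight", "action": "entered", "patient": "castle"}
print(realize(plan(event)))  # → The knight entered the castle.
```

Separating the stages this way is what reduces the data demand: only the realizer needs to be learned, and it sees structured plans rather than raw events.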
no code implementations • WS 2019 • Asad Sayeed, Matthias Lindemann, Vera Demberg
Sentences like “Every child climbed a tree” have at least two interpretations depending on the precedence order of the universal quantifier and the indefinite.
no code implementations • SemEval 2018 • Xudong Hong, Asad Sayeed, Vera Demberg
Human world knowledge contains information about prototypical events and their participants and locations.
no code implementations • TACL 2017 • Ashutosh Modi, Ivan Titov, Vera Demberg, Asad Sayeed, Manfred Pinkal
Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content.