no code implementations • MMMPIE (COLING) 2022 • Ehsan Doostmohammadi, Marco Kuhlmann
Video grounding has been proposed as a way to address issues arising from language models' lack of grounding, e.g., their insufficient commonsense knowledge. The results show that the smaller model benefits from video grounding in predicting highly imageable words, while the results for the larger model seem harder to interpret.
no code implementations • CL (ACL) 2022 • Lena Katharina Schiffer, Marco Kuhlmann, Giorgio Satta
Unlike other mildly context-sensitive formalisms, Combinatory Categorial Grammar (CCG) cannot be parsed in polynomial time when the size of the grammar is taken into account.
1 code implementation • EMNLP (BlackboxNLP) 2021 • Jenny Kunz, Marco Kuhlmann
Previous work on probing word representations for linguistic knowledge has focused on interpolation tasks.
1 code implementation • COLING 2022 • Jenny Kunz, Marco Kuhlmann
Probing studies have extensively explored where in neural language models linguistic information is located.
no code implementations • 18 Oct 2024 • Denitsa Saynova, Lovisa Hagström, Moa Johansson, Richard Johansson, Marco Kuhlmann
Previous interpretations of language models (LMs) miss important distinctions in how these models process factual information.
1 code implementation • 7 May 2024 • Elliot Gestrin, Marco Kuhlmann, Jendrik Seipp
Today's classical planners are powerful, but modeling input tasks in formats such as PDDL is tedious and error-prone.
no code implementations • 16 Feb 2024 • Jenny Kunz, Marco Kuhlmann
The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning.
no code implementations • 16 Feb 2024 • Ehsan Doostmohammadi, Oskar Holmström, Marco Kuhlmann
In evaluating how well automatic methods align with human judgments, correlation metrics are the most commonly employed tool, despite their inherent limitations in handling ties and differing scales.
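As a minimal illustration of the ties issue (toy numbers, not data from the paper): coarse human ratings produce many tied ranks, which different rank-correlation coefficients handle differently.

```python
# Toy example: rank correlation between tied human ratings and a fine-grained
# automatic metric. Kendall's tau-b explicitly corrects for ties; comparing it
# with Spearman's rho shows how ties affect the two coefficients differently.
from scipy.stats import kendalltau, spearmanr

human = [1, 2, 2, 2, 3, 3, 4, 5]                              # Likert-style ratings with ties
automatic = [0.11, 0.25, 0.31, 0.28, 0.40, 0.55, 0.62, 0.90]  # fine-grained metric scores

tau, _ = kendalltau(human, automatic)   # tau-b variant, ties-corrected
rho, _ = spearmanr(human, automatic)
print(f"Kendall tau-b: {tau:.3f}  Spearman rho: {rho:.3f}")
```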
2 code implementations • 7 Jun 2023 • Emanuel Sanchez Aimar, Nathaniel Helgesen, Yonghao Xu, Marco Kuhlmann, Michael Felsberg
Long-tailed semi-supervised learning (LTSSL) represents a practical scenario for semi-supervised applications, challenged by skewed labeled distributions that bias classifiers.
1 code implementation • 25 May 2023 • Ehsan Doostmohammadi, Tobias Norlund, Marco Kuhlmann, Richard Johansson
Inspired by this, we replace the semantic retrieval in RETRO with a surface-level method based on BM25, obtaining a significant reduction in perplexity.
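For context, a minimal sketch of what surface-level BM25 retrieval looks like, using the third-party rank_bm25 package on a toy corpus; the documents, tokenization, and query here are illustrative assumptions, not the retrieval database used in the paper.

```python
# Hedged sketch: lexical (surface-level) retrieval with BM25 over a toy corpus.
from rank_bm25 import BM25Okapi

corpus = [
    "the cat sat on the mat",
    "retrieval-augmented language models use external memory",
    "bm25 is a lexical ranking function over token overlap",
]
bm25 = BM25Okapi([doc.split() for doc in corpus])  # whitespace tokenization for illustration

query = "lexical retrieval with bm25".split()
scores = bm25.get_scores(query)               # one relevance score per document
top_doc = bm25.get_top_n(query, corpus, n=1)  # highest-scoring document
print(scores, top_doc)
```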
no code implementations • 23 Feb 2023 • Tobias Norlund, Ehsan Doostmohammadi, Richard Johansson, Marco Kuhlmann
Recent work on the Retrieval-Enhanced Transformer (RETRO) model has shown that off-loading memory from trainable weights to a retrieval database can significantly improve language modeling and match the performance of non-retrieval models that are an order of magnitude larger in size.
2 code implementations • CVPR 2023 • Emanuel Sanchez Aimar, Arvi Jonnarth, Michael Felsberg, Marco Kuhlmann
We show how to properly define these distributions and combine the experts in order to achieve unbiased predictions, by proving that the ensemble is Fisher-consistent for minimizing the balanced error.
Long-tail Learning
Long-tail Learning on CIFAR-10-LT (ρ=100)
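As an illustrative sketch of the general idea of combining experts with a class-prior correction (logit adjustment) on a long-tailed label distribution: the array shapes, the tau parameter, and the simple averaging below are assumptions for illustration, not the ensemble proved Fisher-consistent in the paper.

```python
# Illustration only: average per-expert logits, then subtract tau * log(prior)
# per class so that frequent (head) classes are not favoured by their frequency.
import numpy as np

def prior_corrected_ensemble(expert_logits, class_priors, tau=1.0):
    """expert_logits: (n_experts, n_samples, n_classes); class_priors: (n_classes,)."""
    avg_logits = expert_logits.mean(axis=0)              # naive expert average
    adjusted = avg_logits - tau * np.log(class_priors)   # logit adjustment by class prior
    return adjusted.argmax(axis=-1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 10))      # 3 experts, 4 samples, 10 classes (made-up)
priors = np.linspace(0.3, 0.01, 10)
priors = priors / priors.sum()            # long-tailed class prior
print(prior_corrected_ensemble(logits, priors))
```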
1 code implementation • COLING 2020 • Jenny Kunz, Marco Kuhlmann
Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations learned by neural sentence encoders such as BERT and ELMo.
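A minimal sketch of such an auxiliary probing classifier, assuming the encoder representations have already been extracted; the random features and labels below are stand-ins for frozen BERT/ELMo states and a linguistic annotation.

```python
# Linear probe trained on frozen encoder representations (stand-in data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))     # placeholder for precomputed contextual embeddings
y = rng.integers(0, 2, size=200)    # placeholder for a binary linguistic property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probing accuracy:", probe.score(X_te, y_te))
```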
no code implementations • WS 2020 • Robin Kurtz, Stephan Oepen, Marco Kuhlmann
We present a neural end-to-end architecture for negation resolution based on a formulation of the task as a graph parsing problem.
no code implementations • CONLL 2019 • Stephan Oepen, Omri Abend, Jan Hajič, Daniel Hershcovich, Marco Kuhlmann, Tim O'Gorman, Nianwen Xue, Jayeol Chun, Milan Straka, Zdeňka Urešová
The 2019 Shared Task at the Conference for Computational Language Learning (CoNLL) was devoted to Meaning Representation Parsing (MRP) across frameworks.
no code implementations • WS 2019 • Robin Kurtz, Daniel Roxbo, Marco Kuhlmann
We extend a state-of-the-art deep neural architecture for semantic dependency parsing with features defined over syntactic dependency trees.
no code implementations • WS 2017 • Robin Kurtz, Marco Kuhlmann
Deep dependency parsing can be cast as the search for maximum acyclic subgraphs in weighted digraphs.
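To make this formulation concrete, here is a toy greedy heuristic over a weighted digraph of candidate arcs: arcs are considered in order of decreasing score and kept only if they do not close a directed cycle. It illustrates the maximum-acyclic-subgraph view only, not the decoding algorithms studied in the paper.

```python
# Toy greedy heuristic for picking a high-scoring acyclic set of dependency arcs.
import networkx as nx

def greedy_acyclic_subgraph(arcs):
    """arcs: (head, dependent, score) triples; returns a high-scoring acyclic arc set."""
    g = nx.DiGraph()
    for head, dep, score in sorted(arcs, key=lambda a: -a[2]):
        g.add_edge(head, dep, weight=score)
        if not nx.is_directed_acyclic_graph(g):   # the new arc closed a cycle
            g.remove_edge(head, dep)
    return sorted(g.edges(data="weight"))

arcs = [("ROOT", "likes", 9.0), ("likes", "Kim", 7.5),
        ("likes", "parsing", 7.0), ("parsing", "likes", 3.0)]
print(greedy_acyclic_subgraph(arcs))
```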
no code implementations • CL 2018 • Marco Kuhlmann, Giorgio Satta, Peter Jonsson
We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994).
no code implementations • LREC 2016 • Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinková, Dan Flickinger, Jan Hajič, Angelina Ivanova, Zdeňka Urešová
We announce a new language resource for research on semantic parsing, a large, carefully curated collection of semantic dependency graphs representing multiple linguistic traditions.
no code implementations • TACL 2015 • Marco Kuhlmann, Peter Jonsson
We study the generalization of maximum spanning tree dependency parsing to maximum acyclic subgraphs.
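The special case being generalized, arc-factored maximum spanning tree parsing, can be sketched as a maximum spanning arborescence over a weighted digraph (Chu-Liu/Edmonds), here via networkx with made-up arc scores.

```python
# Sketch: maximum spanning arborescence as arc-factored dependency parsing.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("ROOT", "saw", 10.0), ("ROOT", "Mary", 2.0),
    ("saw", "Mary", 8.0), ("saw", "John", 7.0), ("Mary", "John", 3.0),
])
tree = nx.maximum_spanning_arborescence(g)   # each word receives exactly one head
print(sorted(tree.edges(data="weight")))
```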
no code implementations • TACL 2014 • Marco Kuhlmann, Giorgio Satta
We present a polynomial-time parsing algorithm for CCG, based on a new decomposition of derivations into small, shareable parts.
no code implementations • WS 2013 • Djamé Seddah, Reut Tsarfaty, Sandra Kübler, Marie Candito, Jinho D. Choi, Richárd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiórkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woliński, Alina Wróblewska, Eric Villemonte de la Clergerie
no code implementations • TACL 2013 • Giorgio Satta, Marco Kuhlmann
Head splitting techniques have been successfully exploited to improve the asymptotic runtime of parsing algorithms for projective dependency trees, under the arc-factored model.