Search Results for author: Marco Kuhlmann

Found 30 papers, 5 papers with code

Tractable Parsing for CCGs of Bounded Degree

no code implementations CL (ACL) 2022 Lena Katharina Schiffer, Marco Kuhlmann, Giorgio Satta

Unlike other mildly context-sensitive formalisms, Combinatory Categorial Grammar (CCG) cannot be parsed in polynomial time when the size of the grammar is taken into account.

On the Effects of Video Grounding on Language Models

no code implementations MMMPIE (COLING) 2022 Ehsan Doostmohammadi, Marco Kuhlmann

The results show that the smaller model benefits from video grounding in predicting highly imageable words, while the results for the larger model are harder to interpret.

Image Captioning · Question Answering +2

Properties and Challenges of LLM-Generated Explanations

no code implementations 16 Feb 2024 Jenny Kunz, Marco Kuhlmann

The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning.

How Reliable Are Automatic Evaluation Methods for Instruction-Tuned LLMs?

no code implementations 16 Feb 2024 Ehsan Doostmohammadi, Oskar Holmström, Marco Kuhlmann

Work on instruction-tuned Large Language Models (LLMs) has used automatic methods based on text overlap and LLM judgments as cost-effective alternatives to human evaluation.

Cross-Lingual Transfer

Flexible Distribution Alignment: Towards Long-tailed Semi-supervised Learning with Proper Calibration

no code implementations 7 Jun 2023 Emanuel Sanchez Aimar, Hannah Helgesen, Yonghao Xu, Marco Kuhlmann, Michael Felsberg

Long-tailed semi-supervised learning (LTSSL) represents a practical scenario for semi-supervised applications, challenged by skewed labeled distributions that bias classifiers.

Data Augmentation

Surface-Based Retrieval Reduces Perplexity of Retrieval-Augmented Language Models

1 code implementation 25 May 2023 Ehsan Doostmohammadi, Tobias Norlund, Marco Kuhlmann, Richard Johansson

Inspired by this, we replace the semantic retrieval in Retro with a surface-level method based on BM25, obtaining a significant reduction in perplexity.

Re-Ranking · Retrieval +1
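
The surface-level retrieval mentioned above is based on BM25, a classical lexical scoring function. As a minimal illustration (a toy sketch of BM25 scoring, not the paper's actual Retro pipeline; `bm25_scores` is a hypothetical helper name):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25.

    Illustrative sketch of surface-level (lexical) retrieval; the paper
    replaces Retro's dense retriever with a BM25 index, which this toy
    version only approximates.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each term across the collection
    df = Counter()
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

Documents sharing more query terms (with rarer terms weighted higher) receive higher scores; documents with no overlap score zero.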

On the Generalization Ability of Retrieval-Enhanced Transformers

no code implementations 23 Feb 2023 Tobias Norlund, Ehsan Doostmohammadi, Richard Johansson, Marco Kuhlmann

Recent work on the Retrieval-Enhanced Transformer (RETRO) model has shown that off-loading memory from trainable weights to a retrieval database can significantly improve language modeling and match the performance of non-retrieval models that are an order of magnitude larger in size.

Language Modelling · Retrieval

Balanced Product of Calibrated Experts for Long-Tailed Recognition

1 code implementation CVPR 2023 Emanuel Sanchez Aimar, Arvi Jonnarth, Michael Felsberg, Marco Kuhlmann

We show how to properly define these distributions and combine the experts in order to achieve unbiased predictions, by proving that the ensemble is Fisher-consistent for minimizing the balanced error.

Long-tail Learning · Long-tail Learning on CIFAR-10-LT (ρ=100) +1
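
The abstract describes combining calibrated experts to obtain unbiased predictions under long-tailed label distributions. A generic post-hoc logit-adjustment sketch in that spirit (not the paper's exact Fisher-consistent construction; `balanced_ensemble` and its inputs are hypothetical):

```python
import math

def balanced_ensemble(expert_logits, class_counts):
    """Average expert logits, then subtract the log class prior.

    A generic logit-adjustment sketch inspired by the abstract, not the
    paper's exact ensemble: removing the log prior from the averaged
    logits counteracts long-tailed label bias, so that argmax targets
    the balanced (per-class average) error rather than raw accuracy.
    """
    total = sum(class_counts)
    log_prior = [math.log(c / total) for c in class_counts]
    n_classes = len(class_counts)
    avg = [sum(e[c] for e in expert_logits) / len(expert_logits)
           for c in range(n_classes)]
    return [avg[c] - log_prior[c] for c in range(n_classes)]
```

With a 90/10 class imbalance, a tail class with a slightly lower raw logit can still win after the prior correction.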

Classifier Probes May Just Learn from Linear Context Features

1 code implementation COLING 2020 Jenny Kunz, Marco Kuhlmann

Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations learned by neural sentence encoders such as BERT and ELMo.

Sentence

End-to-End Negation Resolution as Graph Parsing

no code implementations WS 2020 Robin Kurtz, Stephan Oepen, Marco Kuhlmann

We present a neural end-to-end architecture for negation resolution based on a formulation of the task as a graph parsing problem.

Negation

MRP 2019: Cross-Framework Meaning Representation Parsing

no code implementations CONLL 2019 Stephan Oepen, Omri Abend, Jan Hajic, Daniel Hershcovich, Marco Kuhlmann, Tim O'Gorman, Nianwen Xue, Jayeol Chun, Milan Straka, Zdenka Uresova

The 2019 Shared Task at the Conference on Computational Natural Language Learning (CoNLL) was devoted to Meaning Representation Parsing (MRP) across frameworks.

Sentence

Improving Semantic Dependency Parsing with Syntactic Features

no code implementations WS 2019 Robin Kurtz, Daniel Roxbo, Marco Kuhlmann

We extend a state-of-the-art deep neural architecture for semantic dependency parsing with features defined over syntactic dependency trees.

Dependency Parsing · Semantic Dependency Parsing

Exploiting Structure in Parsing to 1-Endpoint-Crossing Graphs

no code implementations WS 2017 Robin Kurtz, Marco Kuhlmann

Deep dependency parsing can be cast as the search for maximum acyclic subgraphs in weighted digraphs.

Dependency Parsing · Sentence
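
The formulation above, deep dependency parsing as a search for maximum acyclic subgraphs in weighted digraphs, can be illustrated with a simple greedy heuristic (an illustrative sketch, not the authors' algorithm; `greedy_acyclic_subgraph` is a hypothetical name):

```python
def greedy_acyclic_subgraph(n, arcs):
    """Greedily build a high-weight acyclic subgraph of a weighted digraph.

    Sketch of the search problem only: consider arcs by decreasing
    weight and keep an arc iff its weight is positive and it does not
    close a directed cycle with the arcs kept so far. The exact
    problem is NP-hard in general; this heuristic gives no guarantee.
    """
    kept = []
    adj = {i: set() for i in range(n)}

    def reaches(src, dst):
        # DFS over kept arcs: is dst reachable from src?
        stack, seen = [src], set()
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v in seen:
                continue
            seen.add(v)
            stack.extend(adj[v])
        return False

    for u, v, w in sorted(arcs, key=lambda a: -a[2]):
        # adding u -> v is safe iff v cannot already reach u
        if w > 0 and not reaches(v, u):
            kept.append((u, v, w))
            adj[u].add(v)
    return kept
```

On a small digraph with a two-node cycle, the heavier of the two conflicting arcs survives and the lighter one is discarded.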

On the Complexity of CCG Parsing

no code implementations CL 2018 Marco Kuhlmann, Giorgio Satta, Peter Jonsson

We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994).

Sentence · TAG

Parsing to Noncrossing Dependency Graphs

no code implementations TACL 2015 Marco Kuhlmann, Peter Jonsson

We study the generalization of maximum spanning tree dependency parsing to maximum acyclic subgraphs.

Dependency Parsing · Semantic Dependency Parsing
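
The starting point of the generalization above is arc-factored maximum spanning tree parsing: each non-root word picks one head, and the highest-scoring head assignment that forms a tree wins. A brute-force sketch of that tree case (feasible only for toy sentences; all names are illustrative, and real parsers use the Chu-Liu/Edmonds algorithm instead):

```python
from itertools import product

def max_spanning_arborescence(n, weight):
    """Exhaustively find the maximum spanning arborescence rooted at node 0.

    Toy illustration of arc-factored tree parsing: enumerate every head
    assignment for nodes 1..n-1 and keep the best one that is a tree.
    `weight[h][d]` is the score of the arc from head h to dependent d.
    Exponential in n, so only usable on tiny examples.
    """
    best, best_score = None, float("-inf")
    for heads in product(range(n), repeat=n - 1):
        assignment = {d: heads[d - 1] for d in range(1, n)}
        if any(h == d for d, h in assignment.items()):
            continue  # no self-loops
        # a head assignment is a tree iff every node reaches the root
        if any(not _reaches_root(d, assignment) for d in range(1, n)):
            continue  # cyclic assignment, not a tree
        score = sum(weight[h][d] for d, h in assignment.items())
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

def _reaches_root(d, assignment):
    # follow head pointers; a revisited node means a cycle off the root
    seen = set()
    while d != 0:
        if d in seen:
            return False
        seen.add(d)
        d = assignment[d]
    return True
```

Relaxing the tree constraint to mere acyclicity, so a word may keep several heads or none, yields the maximum acyclic subgraph problem studied in the paper.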

A New Parsing Algorithm for Combinatory Categorial Grammar

no code implementations TACL 2014 Marco Kuhlmann, Giorgio Satta

We present a polynomial-time parsing algorithm for CCG, based on a new decomposition of derivations into small, shareable parts.

Machine Translation

Efficient Parsing for Head-Split Dependency Trees

no code implementations TACL 2013 Giorgio Satta, Marco Kuhlmann

Head splitting techniques have been successfully exploited to improve the asymptotic runtime of parsing algorithms for projective dependency trees, under the arc-factored model.

Dependency Parsing
