Search Results for author: David Mareček

Found 12 papers, 2 papers with code

Analyzing BERT’s Knowledge of Hypernymy via Prompting

no code implementations • EMNLP (BlackboxNLP) 2021 • Michael Hanna, David Mareček

The high performance of large pretrained language models such as BERT on NLP tasks has prompted questions about BERT’s linguistic capabilities and how these differ from humans’.

Hypernym Discovery
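
Prompting a masked language model for hypernymy typically means building cloze sentences whose masked slot should be filled by a hypernym. The templates below are illustrative placeholders, not necessarily the paper's exact prompts:

```python
# Sketch of cloze-style prompts for probing hypernymy knowledge in a masked
# language model. Template wording is an assumption for illustration.

def hypernymy_prompts(hyponym):
    """Build cloze prompts whose [MASK] slot should be a hypernym."""
    templates = [
        "A {h} is a type of [MASK].",
        "A {h} is a kind of [MASK].",
        "{h}s and other [MASK]s.",
    ]
    return [t.format(h=hyponym) for t in templates]

prompts = hypernymy_prompts("cat")
print(prompts[0])  # A cat is a type of [MASK].
```

A fill-mask model (e.g. BERT) would then score candidate hypernyms such as "animal" for the masked position.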

Introducing Orthogonal Constraint in Structural Probes

1 code implementation • ACL 2021 • Tomasz Limisiewicz, David Mareček

With the recent success of pre-trained models in NLP, a significant focus was put on interpreting their representations.

Word Embeddings
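
A structural probe learns a linear map under which distances between contextual embeddings approximate syntactic tree distances; an orthogonal constraint can be imposed as a penalty on the probe matrix. The sketch below is a minimal illustration with made-up shapes, not the paper's implementation:

```python
import numpy as np

# Minimal sketch of a structural distance probe with an orthogonality
# penalty. B maps embeddings into a space where squared L2 distance is
# meant to approximate tree distance; the penalty pushes B's rows toward
# orthonormality. Dimensions and initialization are illustrative.

rng = np.random.default_rng(0)
d, k = 8, 4                      # embedding dim, probe rank (assumed)
B = rng.normal(size=(k, d)) * 0.1

def probe_distance(B, h_i, h_j):
    diff = B @ (h_i - h_j)
    return float(diff @ diff)

def orthogonality_penalty(B):
    # Squared Frobenius norm of B B^T - I; zero iff rows are orthonormal.
    gram = B @ B.T
    return float(np.linalg.norm(gram - np.eye(B.shape[0])) ** 2)

h1, h2 = rng.normal(size=d), rng.normal(size=d)
print(probe_distance(B, h1, h2), orthogonality_penalty(B))
```

Training would minimize the gap between probe distances and gold tree distances plus a weighted orthogonality penalty.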

Syntax Representation in Word Embeddings and Neural Networks -- A Survey

no code implementations • 2 Oct 2020 • Tomasz Limisiewicz, David Mareček

Neural networks trained on natural language processing tasks capture syntax even though it is not provided as a supervision signal.

Language Modelling • Machine Translation • +3

Measuring Memorization Effect in Word-Level Neural Networks Probing

no code implementations • 29 Jun 2020 • Rudolf Rosa, Tomáš Musil, David Mareček

In classical probing, a classifier is trained on the representations to extract the target linguistic information.

Machine Translation • Translation
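
The classical probing setup mentioned above can be sketched as: freeze the representations, then train a small classifier on top to predict a linguistic label. Here synthetic Gaussian clusters stand in for real network activations:

```python
import numpy as np

# Toy illustration of classical probing: a logistic-regression probe
# trained on frozen "representations" (synthetic data, not real
# activations) to predict a binary linguistic label.

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, size=(50, 4)),
               rng.normal(+1, 0.3, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(4), 0.0
for _ in range(200):                       # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * float(np.mean(p - y))

acc = float(np.mean(((X @ w + b) > 0) == y))
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy is usually read as evidence that the information is present in the representations, which is exactly what memorization effects can confound.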

Universal Dependencies according to BERT: both more specific and more general

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Tomasz Limisiewicz, Rudolf Rosa, David Mareček

This work focuses on analyzing the form and extent of syntactic abstraction captured by BERT by extracting labeled dependency trees from self-attentions.
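
The simplest way to read a dependency structure out of a self-attention head is to treat the attention weight from word i to word j as a score that j heads i and take the argmax per row. This is only the crudest variant; the actual extraction aggregates heads and enforces tree constraints (e.g. via a maximum spanning tree):

```python
import numpy as np

# Sketch of reading an (unlabeled) dependency structure from one
# self-attention head: pick the highest-attention token as each word's
# head. The attention matrix below is made up for illustration.

def heads_from_attention(attn, root=0):
    """attn: (n, n) row-stochastic matrix; returns a head index per word."""
    attn = attn.copy()
    np.fill_diagonal(attn, -np.inf)   # a word cannot head itself
    heads = attn.argmax(axis=1)
    heads[root] = -1                  # the root has no head
    return heads

attn = np.array([[0.1, 0.6, 0.3],
                 [0.7, 0.1, 0.2],
                 [0.2, 0.7, 0.1]])
print(heads_from_attention(attn))     # [-1  0  1]
```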

Inducing Syntactic Trees from BERT Representations

no code implementations • 27 Jun 2019 • Rudolf Rosa, David Mareček

We use the English model of BERT and explore how a deletion of one word in a sentence changes representations of other words.

Language Modelling
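
The deletion probe amounts to encoding a sentence twice, with and without one word, and measuring how far each surviving word's representation moves. Random vectors stand in for BERT outputs below; a real run would call an actual encoder twice:

```python
import numpy as np

# Sketch of the word-deletion probe: compare each surviving word's
# representation before and after deleting one word. Random vectors are
# stand-ins for contextual embeddings from a real encoder.

rng = np.random.default_rng(2)

def deletion_effect(full_reprs, reduced_reprs, deleted_idx):
    """Cosine distance per surviving word between the two encodings."""
    kept = np.delete(full_reprs, deleted_idx, axis=0)
    cos = np.sum(kept * reduced_reprs, axis=1) / (
        np.linalg.norm(kept, axis=1) * np.linalg.norm(reduced_reprs, axis=1))
    return 1.0 - cos

full = rng.normal(size=(5, 8))        # 5 words, 8-dim "representations"
# simulate mild contextual shift after deleting word 2
reduced = full[[0, 1, 3, 4]] + rng.normal(scale=0.05, size=(4, 8))
print(deletion_effect(full, reduced, deleted_idx=2))
```

Words whose representations move the most are, intuitively, the ones most dependent on the deleted word, which is what makes this signal usable for inducing syntactic trees.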

Derivational Morphological Relations in Word Embeddings

no code implementations • 6 Jun 2019 • Tomáš Musil, Jonáš Vidra, David Mareček

Derivation is a word-formation process that creates new words from existing ones by adding, changing, or deleting affixes.

Word Embeddings
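
A common way to test a derivational relation (e.g. verb → agent noun with "-er") in embeddings is the vector-offset analogy. The vectors below are synthetic, constructed so the relation is a roughly constant offset; real embeddings only approximate this:

```python
import numpy as np

# Toy vector-offset test for a derivational relation. Each "-er" form is
# its base vector plus a shared offset and small noise (synthetic data).

rng = np.random.default_rng(3)
offset = rng.normal(size=16)
pairs = {}
for base in ["teach", "sing", "work"]:
    v = rng.normal(size=16)
    pairs[base] = (v, v + offset + rng.normal(scale=0.05, size=16))

# Predict v(singer) by analogy: v(sing) + (v(teacher) - v(teach))
v_teach, v_teacher = pairs["teach"]
v_sing, v_singer = pairs["sing"]
predicted = v_sing + (v_teacher - v_teach)
err = float(np.linalg.norm(predicted - v_singer))
print(f"analogy error: {err:.3f}")
```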

From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions

no code implementations • WS 2019 • David Mareček, Rudolf Rosa

We inspect the multi-head self-attention in Transformer NMT encoders for three source languages, looking for patterns that could have a syntactic interpretation.

Input Combination Strategies for Multi-Source Transformer Decoder

no code implementations • 12 Nov 2018 • Jindřich Libovický, Jindřich Helcl, David Mareček

In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways.
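
Two of the usual ways to combine attention over two source encoders can be sketched as follows: a flat combination (one softmax over the concatenated source states) versus a hierarchical one (attend within each source, then mix the per-source contexts). The code is an illustrative simplification; in practice the mixing weight would itself be learned:

```python
import numpy as np

# Sketch of two multi-source attention combination strategies: "flat"
# (one joint softmax over both sources) and "hierarchical" (per-source
# attention, then a weighted mix). Shapes and the fixed beta are
# illustrative assumptions.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def flat_combination(q, src_a, src_b):
    keys = np.vstack([src_a, src_b])          # one joint attention
    alpha = softmax(keys @ q)
    return alpha @ keys

def hierarchical_combination(q, src_a, src_b, beta=0.5):
    ctx_a = softmax(src_a @ q) @ src_a        # per-source attention...
    ctx_b = softmax(src_b @ q) @ src_b
    return beta * ctx_a + (1 - beta) * ctx_b  # ...then a weighted mix

rng = np.random.default_rng(4)
q = rng.normal(size=6)                        # decoder query
src_a = rng.normal(size=(3, 6))               # encoder states, source A
src_b = rng.normal(size=(4, 6))               # encoder states, source B
print(flat_combination(q, src_a, src_b).shape,
      hierarchical_combination(q, src_a, src_b).shape)
```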

