Search Results for author: Ákos Kádár

Found 15 papers, 10 papers with code

Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations

1 code implementation LREC 2022 Chris Emmery, Ákos Kádár, Grzegorz Chrupała, Walter Daelemans

The perturbed data, models, and code are available for reproduction at https://github.com/cmry/augtox.

Subword Pooling Makes a Difference

1 code implementation 22 Feb 2021 Judit Ács, Ákos Kádár, András Kornai

For POS tagging both of these strategies perform poorly and the best choice is to use a small LSTM over the subwords.

Tasks: NER, POS (+1)
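The excerpt above contrasts pooling strategies for turning subword vectors into a single word representation. A minimal pure-Python sketch of two simple strategies (mean pooling and last-subword pooling), using toy 2-dimensional vectors; the pieces and values are hypothetical, and the paper's preferred choice for POS tagging, a small LSTM over the subwords, is not reproduced here:

```python
# Hedged sketch of subword pooling: combine the vectors of a word's
# subword pieces into one word-level vector. Toy data throughout.

def mean_pool(subword_vecs):
    """Average the subword vectors elementwise into one word vector."""
    dim = len(subword_vecs[0])
    n = len(subword_vecs)
    return [sum(v[i] for v in subword_vecs) / n for i in range(dim)]

def last_pool(subword_vecs):
    """Use only the vector of the final subword piece."""
    return subword_vecs[-1]

# Hypothetical segmentation of "unbelievable" into subword pieces,
# each with a toy 2-d embedding (not real model output).
pieces = {"un": [1.0, 0.0], "##believ": [0.0, 1.0], "##able": [1.0, 1.0]}
vecs = list(pieces.values())

print(mean_pool(vecs))  # elementwise average of the three vectors
print(last_pool(vecs))  # vector of "##able"
```

In practice the pooling operates over contextual subword embeddings from a pretrained encoder; the point here is only the shape of the operation, not the embeddings themselves.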

Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling

1 code implementation EACL 2021 Chris Emmery, Ákos Kádár, Grzegorz Chrupała

Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author information.

Tasks: Privacy Preserving

Bootstrapping Disjoint Datasets for Multilingual Multimodal Representation Learning

no code implementations 9 Nov 2019 Ákos Kádár, Grzegorz Chrupała, Afra Alishahi, Desmond Elliott

However, we do find that using an external machine translation model to generate the synthetic data sets results in better performance.

Tasks: Machine Translation, Representation Learning (+4)

Improving Lemmatization of Non-Standard Languages with Joint Learning

2 code implementations NAACL 2019 Enrique Manjavacas, Ákos Kádár, Mike Kestemont

Lemmatization of standard languages is concerned with (i) abstracting over morphological differences and (ii) resolving token-lemma ambiguities of inflected words in order to map them to a dictionary headword.

Tasks: Language Modelling, LEMMA (+3)
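The abstract excerpt above names the two sub-problems of lemmatization: abstracting over inflection and resolving token-lemma ambiguity. A toy dictionary-lookup sketch illustrating both, with hypothetical entries; the paper's actual approach is a joint neural model conditioned on sentence context, not a POS rule like the one used here:

```python
# Hedged toy illustration of lemmatization: map inflected tokens to
# dictionary headwords. Lexicon entries and the POS rule are hypothetical.
lexicon = {
    "walked": ["walk"],           # (i) plain inflection: one candidate lemma
    "left":   ["leave", "left"],  # (ii) ambiguity: past of "leave" vs adjective
}

def lemmatize(token, pos):
    """Look up candidate lemmas; disambiguate with a crude POS rule."""
    candidates = lexicon.get(token, [token])  # back off to the token itself
    if len(candidates) == 1:
        return candidates[0]
    # Toy disambiguation; a joint model would use learned sentence context.
    return candidates[0] if pos == "VERB" else candidates[1]

print(lemmatize("walked", "VERB"))  # walk
print(lemmatize("left", "VERB"))    # leave
print(lemmatize("left", "ADJ"))     # left
```

Non-standard (e.g. historical) language makes both steps harder, since spelling variation breaks simple lexicon lookup, which is what motivates learned, jointly trained models.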

Revisiting the Hierarchical Multiscale LSTM

no code implementations COLING 2018 Ákos Kádár, Marc-Alexandre Côté, Grzegorz Chrupała, Afra Alishahi

Hierarchical Multiscale LSTM (Chung et al., 2016a) is a state-of-the-art language model that learns interpretable structure from character-level input.

Tasks: Language Modelling

NeuralREG: An end-to-end approach to referring expression generation

1 code implementation ACL 2018 Thiago Castro Ferreira, Diego Moussallem, Ákos Kádár, Sander Wubben, Emiel Krahmer

Traditionally, Referring Expression Generation (REG) models first decide on the form and then on the content of references to discourse entities in text, typically relying on features such as salience and grammatical function.

Tasks: Referring Expression, Referring Expression Generation

On the difficulty of a distributional semantics of spoken language

no code implementations WS 2019 Grzegorz Chrupała, Lieke Gelderloos, Ákos Kádár, Afra Alishahi

In the domain of unsupervised learning most work on speech has focused on discovering low-level constructs such as phoneme inventories or word-like units.

Imagination improves Multimodal Translation

no code implementations IJCNLP 2017 Desmond Elliott, Ákos Kádár

We decompose multimodal translation into two sub-tasks: learning to translate and learning visually grounded representations.

Tasks: Translation

Representation of linguistic form and function in recurrent neural networks

1 code implementation CL 2017 Ákos Kádár, Grzegorz Chrupała, Afra Alishahi

We present novel methods for analyzing the activation patterns of RNNs from a linguistic point of view and explore the types of linguistic structure they learn.

Tasks: Language Modelling, Sentence (+1)

Learning language through pictures

1 code implementation IJCNLP 2015 Grzegorz Chrupała, Ákos Kádár, Afra Alishahi

We propose Imaginet, a model of learning visually grounded representations of language from coupled textual and visual input.

Tasks: Sentence, Word Embeddings
