no code implementations • insights (ACL) 2022 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer
Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone.
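For intuition, anisotropy is commonly quantified as the expected cosine similarity between contextual embeddings of randomly sampled tokens: a nearly isotropic space scores close to zero, while embeddings confined to a narrow cone score close to one. A minimal sketch of that measurement, assuming a Hugging Face BERT encoder (the model name and use of the last layer are illustrative choices):

```python
import torch
from itertools import combinations
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; any transformer encoder works here.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["The cat sat on the mat.", "Stock prices fell sharply today."]
embeddings = []
with torch.no_grad():
    for s in sentences:
        out = model(**tok(s, return_tensors="pt"))
        # Collect each token's last-layer contextual embedding.
        embeddings.extend(out.last_hidden_state[0])

# Anisotropy estimate: mean cosine similarity over all token pairs.
sims = [torch.cosine_similarity(a, b, dim=0).item()
        for a, b in combinations(embeddings, 2)]
print(f"mean pairwise cosine similarity: {sum(sims) / len(sims):.3f}")
```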
1 code implementation • 9 Feb 2022 • Francesco Fusco, Damian Pascual, Peter Staar, Diego Antognini
Large pre-trained language models based on the transformer architecture have drastically changed the natural language processing (NLP) landscape.
1 code implementation • Findings (EMNLP) 2021 • Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, Roger Wattenhofer
Large pre-trained language models have repeatedly shown their ability to produce fluent text.
no code implementations • NAACL (BioNLP) 2021 • Damian Pascual, Sandro Luck, Roger Wattenhofer
Contrary to the general trend in language processing, no transformer model has been reported to reach high performance on this task.
1 code implementation • 12 Jan 2021 • Sumu Zhao, Damian Pascual, Gino Brunner, Roger Wattenhofer
In this work, we provide new insights into the transformer architecture and, in particular, its best-known variant, BERT.
1 code implementation • 31 Dec 2020 • Damian Pascual, Beni Egressy, Florian Bolli, Roger Wattenhofer
Given that state-of-the-art language models are too large to be trained from scratch in a manageable time, it is desirable to control these models without re-training them.
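One family of such plug-and-play approaches steers a frozen model at decoding time, for example by biasing next-token logits toward desired words. A minimal sketch of that general idea with GPT-2 (the target word and bias strength are illustrative assumptions, not the paper's exact method):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The weather today is"
target_ids = tok.encode(" sunny")  # illustrative target word
bias = 4.0                         # illustrative bias strength

ids = tok.encode(prompt, return_tensors="pt")
with torch.no_grad():
    for _ in range(15):
        logits = model(ids).logits[0, -1]
        # Plug-and-play step: nudge the frozen model toward the
        # target token(s) without touching any weights.
        logits[target_ids] += bias
        next_id = torch.argmax(logits).view(1, 1)
        ids = torch.cat([ids, next_id], dim=1)

print(tok.decode(ids[0]))
```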
1 code implementation • 10 Sep 2020 • Nicolas Affolter, Beni Egressy, Damian Pascual, Roger Wattenhofer
In the case of language stimuli, recent studies have shown that it is possible to decode fMRI scans into an embedding of the word a subject is reading.
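A common baseline for this decoding setup is a linear map from voxel activations to word embeddings, with the predicted vector matched to its nearest word in embedding space. A minimal sketch on synthetic data (the ridge-regression-to-embedding pipeline is a standard baseline, not necessarily this paper's model):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_scans, n_voxels, emb_dim = 200, 500, 50

# Synthetic stand-ins for fMRI scans and the embedding of the word
# the subject was reading at each scan.
vocab_emb = rng.standard_normal((100, emb_dim))  # one embedding per word
word_idx = rng.integers(0, 100, size=n_scans)    # word shown per scan
scans = rng.standard_normal((n_scans, n_voxels))

# Linear decoder: voxel activations -> word embedding.
decoder = Ridge(alpha=1.0).fit(scans[:150], vocab_emb[word_idx[:150]])
pred = decoder.predict(scans[150:])

# Decode by nearest neighbour in embedding space (cosine similarity).
pred_n = pred / np.linalg.norm(pred, axis=1, keepdims=True)
vocab_n = vocab_emb / np.linalg.norm(vocab_emb, axis=1, keepdims=True)
decoded = (pred_n @ vocab_n.T).argmax(axis=1)
print("top-1 accuracy:", (decoded == word_idx[150:]).mean())
```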
no code implementations • 25 Aug 2020 • Lukas Faber, Sandro Luck, Damian Pascual, Andreas Roth, Gino Brunner, Roger Wattenhofer
The automatic generation of medleys, i.e., musical pieces formed by different songs concatenated via smooth transitions, is not well studied in the current literature.
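As a concrete picture of the simplest smooth transition, two tracks can be joined with a linear crossfade over an overlap window. A minimal numpy sketch (the sample rate, fade length, and sine-wave stand-ins for songs are illustrative):

```python
import numpy as np

def crossfade(a: np.ndarray, b: np.ndarray, fade_len: int) -> np.ndarray:
    """Concatenate two mono tracks, linearly fading a out and b in."""
    ramp = np.linspace(0.0, 1.0, fade_len)
    overlap = a[-fade_len:] * (1.0 - ramp) + b[:fade_len] * ramp
    return np.concatenate([a[:-fade_len], overlap, b[fade_len:]])

sr = 22050  # illustrative sample rate
t = np.linspace(0, 2.0, 2 * sr)
song_a = np.sin(2 * np.pi * 440 * t)  # stand-ins for two songs
song_b = np.sin(2 * np.pi * 330 * t)
medley = crossfade(song_a, song_b, fade_len=sr // 2)
```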
no code implementations • EACL 2021 • Damian Pascual, Gino Brunner, Roger Wattenhofer
In this way, we propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and we analyze BERT from both angles.
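To make the two views concrete: local patterns can be read off the attention tensors a transformer exposes, whereas global patterns call for attribution back to the input tokens. A minimal sketch of both views using standard Hugging Face hooks (the saliency variant here is plain gradient-times-input, chosen for brevity and not necessarily the paper's method; the classification head of the base checkpoint is untrained, so the scores only illustrate the mechanics):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

enc = tok("A beautifully acted film.", return_tensors="pt")

# Local view: raw self-attention maps, one tensor per layer.
out = model(**enc, output_attentions=True)
print("layer-0 attention shape:", out.attentions[0].shape)  # (1, heads, seq, seq)

# Global view: saliency of the prediction w.r.t. the input embeddings.
emb = model.bert.embeddings.word_embeddings(enc["input_ids"])
emb.retain_grad()
logits = model(inputs_embeds=emb, attention_mask=enc["attention_mask"]).logits
logits[0, logits[0].argmax()].backward()
saliency = (emb.grad * emb).sum(-1).abs()[0]  # gradient x input per token
for token, score in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), saliency):
    print(f"{token:12s} {score.item():.3f}")
```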
1 code implementation • 22 Jul 2019 • Damian Pascual, Amir Aminifar, David Atienza, Philippe Ryvlin, Roger Wattenhofer
In this work, we generate synthetic seizure-like brain electrical activities, i.e., EEG signals, that can be used to train seizure detection algorithms, alleviating the need for recorded data.
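One lightweight way to produce such signals, shown below purely for illustration and not necessarily the paper's generative model, is phase-randomized surrogates: keep a real segment's power spectrum, scramble its phases, and invert the FFT, yielding a new signal with the same spectral content.

```python
import numpy as np

def phase_randomized_surrogate(signal: np.ndarray, seed: int = 0) -> np.ndarray:
    """Synthetic signal with the same power spectrum but random phases."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
    # Preserve magnitudes; scramble phases (the DC term keeps its phase).
    surrogate = np.abs(spectrum) * np.exp(1j * phases)
    surrogate[0] = spectrum[0]
    return np.fft.irfft(surrogate, n=len(signal))

# Stand-in for a recorded seizure EEG segment (256 Hz, 4 s).
eeg = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.3 * np.random.randn(1024)
synthetic = phase_randomized_surrogate(eeg)
```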