Search Results for author: Dieuwke Hupkes

Found 42 papers, 18 papers with code

Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network

1 code implementation CoNLL (EMNLP) 2021 Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, Dieuwke Hupkes

Inflectional morphology has long been a useful testing ground for broader questions about generalisation in language and the viability of neural network models as cognitive models of language.

Can Transformers Process Recursive Nested Constructions, Like Humans?

no code implementations COLING 2022 Yair Lakretz, Théo Desbordes, Dieuwke Hupkes, Stanislas Dehaene

A recent study evaluated recursive processing in recurrent neural language models (RNN-LMs) and showed that such models perform below chance level on embedded dependencies within nested constructions – a prototypical example of recursion in natural language.

From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency

1 code implementation 18 Apr 2024 Xenia Ohmer, Elia Bruni, Dieuwke Hupkes

The staggering pace with which the capabilities of large language models (LLMs) are increasing, as measured by a range of commonly used natural language understanding (NLU) benchmarks, raises many questions regarding what "understanding" means for a language model and how it compares to human understanding.

Language Modelling Natural Language Understanding

The ICL Consistency Test

no code implementations 8 Dec 2023 Lucas Weber, Elia Bruni, Dieuwke Hupkes

Just like the previous generation of task-tuned models, large language models (LLMs) that are adapted to tasks via prompt-based methods like in-context learning (ICL) perform well in some setups but not in others.
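For context, in-context learning adapts a model by prepending labelled demonstrations to the query rather than updating weights. A minimal sketch of such prompt construction follows; the sentiment demonstrations, the `build_icl_prompt` helper, and the label format are invented for illustration and are not drawn from the ICL Consistency Test itself.

```python
# Minimal sketch of few-shot in-context learning (ICL) prompt construction.
# The demonstrations below are hypothetical placeholders, not benchmark items.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regretted buying a ticket.", "negative"),
]
query = "The plot dragged, but the acting was superb."

def build_icl_prompt(demonstrations, query):
    """Prepend labelled examples to the query so the LLM can infer the task."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_icl_prompt(demonstrations, query))
```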

In-Context Learning Natural Language Inference

WorldSense: A Synthetic Benchmark for Grounded Reasoning in Large Language Models

1 code implementation 27 Nov 2023 Youssef Benchekroun, Megi Dervishi, Mark Ibrahim, Jean-Baptiste Gaya, Xavier Martinet, Grégoire Mialon, Thomas Scialom, Emmanuel Dupoux, Dieuwke Hupkes, Pascal Vincent

We propose WorldSense, a benchmark designed to assess the extent to which LLMs are consistently able to sustain tacit world models, by testing how they draw simple inferences from descriptions of simple arrangements of entities.

In-Context Learning

Memorisation Cartography: Mapping out the Memorisation-Generalisation Continuum in Neural Machine Translation

no code implementations 9 Nov 2023 Verna Dankers, Ivan Titov, Dieuwke Hupkes

When a neural network is trained, it quickly memorises some source-target mappings from the dataset but never learns some others.

counterfactual Machine Translation +2

The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks

1 code implementation 26 Oct 2023 Kaiser Sun, Adina Williams, Dieuwke Hupkes

NLP models have progressed drastically in recent years, as measured by the numerous datasets proposed to evaluate their performance.

Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning

no code implementations 20 Oct 2023 Lucas Weber, Elia Bruni, Dieuwke Hupkes

Finding the best way of adapting pre-trained language models to a task is a big challenge in current NLP.

In-Context Learning

Curriculum Learning with Adam: The Devil Is in the Wrong Details

no code implementations 23 Aug 2023 Lucas Weber, Jaap Jumelet, Paul Michel, Elia Bruni, Dieuwke Hupkes

We present a number of case studies with common hand-crafted and automated CL approaches to illustrate this phenomenon, and we find that none of them outperforms plain Adam with well-chosen hyperparameters.

Separating form and meaning: Using self-consistency to quantify task understanding across multiple senses

1 code implementation 19 May 2023 Xenia Ohmer, Elia Bruni, Dieuwke Hupkes

Given the staggering pace at which the capabilities of large language models (LLMs) are increasing, creating future-proof evaluation sets to assess their understanding becomes more and more challenging.

Benchmarking

Text Characterization Toolkit

no code implementations 4 Oct 2022 Daniel Simig, Tianlu Wang, Verna Dankers, Peter Henderson, Khuyagbaatar Batsuren, Dieuwke Hupkes, Mona Diab

In NLP, models are usually evaluated by reporting single-number performance scores on a number of readily available benchmarks, without much deeper analysis.

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

3 code implementations9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

Towards Interactive Language Modeling

no code implementations 14 Dec 2021 Maartje ter Hoeve, Evgeny Kharitonov, Dieuwke Hupkes, Emmanuel Dupoux

As a first contribution we present a road map in which we detail the steps that need to be taken towards interactive language modeling.

Language Acquisition Language Modelling

Sparse Interventions in Language Models with Differentiable Masking

no code implementations 13 Dec 2021 Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov

Typically, interpretation methods i) do not guarantee that the model actually uses the encoded information, and ii) do not discover small subsets of neurons responsible for a considered phenomenon.

Bias Detection Gender Bias Detection

Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans

no code implementations 14 Oct 2021 Yair Lakretz, Théo Desbordes, Dieuwke Hupkes, Stanislas Dehaene

A recent study evaluated recursive processing in recurrent neural language models (RNN-LMs) and showed that such models perform below chance level on embedded dependencies within nested constructions – a prototypical example of recursion in natural language.

How BPE Affects Memorization in Transformers

no code implementations 6 Oct 2021 Eugene Kharitonov, Marco Baroni, Dieuwke Hupkes

In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both the ability and the tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters.
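For readers unfamiliar with the knob in question, here is a hedged sketch of training BPE tokenizers with different vocabulary sizes using the Hugging Face `tokenizers` library; the corpus file `train.txt` and the chosen sizes are placeholders, and this is not the experimental setup from the paper.

```python
# Sketch: train BPE tokenizers with different vocabulary sizes (illustrative only;
# not the setup used in "How BPE Affects Memorization in Transformers").
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

corpus_files = ["train.txt"]  # hypothetical training corpus

for vocab_size in (1_000, 8_000, 32_000):  # the variable the paper studies
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=vocab_size, special_tokens=["[UNK]"])
    tokenizer.train(corpus_files, trainer)
    sample = tokenizer.encode("Subword vocabularies change how sequences are segmented.")
    print(vocab_size, len(sample.tokens))  # larger vocabularies yield shorter sequences
```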

Memorization

Language Models Use Monotonicity to Assess NPI Licensing

1 code implementation Findings (ACL) 2021 Jaap Jumelet, Milica Denić, Jakub Szymanik, Dieuwke Hupkes, Shane Steinert-Threlkeld

We investigate the semantic knowledge of language models (LMs), focusing on (1) whether these LMs create categories of linguistic environments based on their semantic monotonicity properties, and (2) whether these categories play a similar role in LMs as in human language understanding, using negative polarity item licensing as a case study.

Linguistic Acceptability

Attention vs non-attention for a Shapley-based explanation method

no code implementations NAACL (DeeLIO) 2021 Tom Kersten, Hugh Mee Wong, Jaap Jumelet, Dieuwke Hupkes

In this work, we consider Contextual Decomposition (CD) – a Shapley-based input feature attribution method that has been shown to work well for recurrent NLP models – and we test the extent to which it is useful for models that contain attention operations.
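As background on what "Shapley-based" means here, the sketch below estimates Shapley values for input tokens by Monte-Carlo permutation sampling. It is not the Contextual Decomposition algorithm itself; the `shapley_attributions` helper, the masking-with-a-baseline-token scheme, and the toy model are assumptions made purely for illustration.

```python
import random

def shapley_attributions(model, tokens, baseline="[MASK]", n_samples=200):
    """Monte-Carlo estimate of Shapley values for token-level attributions.
    `model` maps a list of tokens to a scalar score; replacing a token with a
    baseline token stands in for feature removal (a simplification, not CD)."""
    n = len(tokens)
    values = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        random.shuffle(order)
        current = [baseline] * n
        prev_score = model(current)
        for i in order:
            current[i] = tokens[i]                         # add feature i to the coalition
            score = model(current)
            values[i] += (score - prev_score) / n_samples  # marginal contribution
            prev_score = score
    return values

# Toy usage with a stand-in "model" that just counts sentiment words.
toy_model = lambda toks: sum(t == "great" for t in toks) - sum(t == "awful" for t in toks)
print(shapley_attributions(toy_model, ["the", "film", "was", "great"], n_samples=500))
```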

Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little

no code implementations EMNLP 2021 Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, Douwe Kiela

A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in classical NLP pipelines.

Language Modelling Masked Language Modeling

Language Modelling as a Multi-Task Problem

no code implementations EACL 2021 Lucas Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes

In this paper, we propose to study language modelling as a multi-task problem, bringing together three strands of research: multi-task learning, linguistics, and interpretability.

Language Modelling Multi-Task Learning

The Grammar of Emergent Languages

1 code implementation EMNLP 2020 Oskar van der Wal, Silvan de Boer, Elia Bruni, Dieuwke Hupkes

In this paper, we consider the syntactic properties of languages that emerge in referential games, using unsupervised grammar induction (UGI) techniques originally designed to analyse natural language.

Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans

no code implementations 19 Jun 2020 Yair Lakretz, Dieuwke Hupkes, Alessandra Vergallito, Marco Marelli, Marco Baroni, Stanislas Dehaene

We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing, namely the storing of grammatical number and gender information in working memory and its use in long-distance agreement (e.g., capturing the correct number agreement between subject and verb when they are separated by other phrases).

Sentence

Co-evolution of language and agents in referential games

1 code implementation EACL 2021 Gautier Dagan, Dieuwke Hupkes, Elia Bruni

However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners.

Location Attention for Extrapolation to Longer Sequences

no code implementations ACL 2020 Yann Dubois, Gautier Dagan, Dieuwke Hupkes, Elia Bruni

We hypothesize that models with a separate content- and location-based attention are more likely to extrapolate than those with common attention mechanisms.
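To make the distinction concrete, here is a toy numpy sketch contrasting content-based and location-based attention scores; the dimensions, weights, and one-hot position encodings are arbitrary placeholders, not the architecture proposed in the paper.

```python
# Toy illustration of content-based vs location-based attention scores
# (a sketch of the general idea, not the model from the paper).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 8
keys = rng.normal(size=(seq_len, d))   # content representations of source positions
query = rng.normal(size=(d,))          # decoder state
pos = np.eye(seq_len)                  # one-hot position encodings
w_loc = rng.normal(size=(seq_len,))    # location-scoring weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

content_scores = softmax(keys @ query / np.sqrt(d))  # depends on what the tokens are
location_scores = softmax(pos @ w_loc)               # depends only on where they are
print("content :", content_scores.round(2))
print("location:", location_scores.round(2))
```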

Compositionality decomposed: how do neural networks generalise?

1 code implementation 22 Aug 2019 Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni

Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional.

Assessing incrementality in sequence-to-sequence models

1 code implementation WS 2019 Dennis Ulmer, Dieuwke Hupkes, Elia Bruni

Since their inception, encoder-decoder models have successfully been applied to a wide array of problems in computational linguistics.

On the Realization of Compositionality in Neural Networks

no code implementations WS 2019 Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni

We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.

Transcoding compositionally: using attention to find more generalizable solutions

1 code implementation WS 2019 Kris Korrel, Dieuwke Hupkes, Verna Dankers, Elia Bruni

While sequence-to-sequence models have shown remarkable generalization power across several natural language tasks, the solutions they construct are argued to be less compositional than human-like generalization.

The emergence of number and syntax units in LSTM language models

1 code implementation NAACL 2019 Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, Marco Baroni

Importantly, the behaviour of these units is partially controlled by other units independently shown to track syntactic structure.

Language Modelling

Formal models of Structure Building in Music, Language and Animal Songs

no code implementations 16 Jan 2019 Willem Zuidema, Dieuwke Hupkes, Geraint Wiggins, Constance Scharff, Martin Rohrmeier

Human language, music and a variety of animal vocalisations constitute ways of sonic communication that exhibit remarkable structural complexity.

Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

no code implementations WS 2018 Jaap Jumelet, Dieuwke Hupkes

In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items.

Language Modelling Sentence

Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue

no code implementations WS 2018 Dieuwke Hupkes, Sanne Bouwmeester, Raquel Fernández

We investigate how encoder-decoder models trained on a synthetic dataset of task-oriented dialogues process disfluencies, such as hesitations and self-corrections.

Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information

no code implementations WS 2018 Mario Giulianelli, Jacqueline Harding, Florian Mohnert, Dieuwke Hupkes, Willem Zuidema

We show that 'diagnostic classifiers', trained to predict number from the internal states of a language model, provide a detailed understanding of how, when, and where this information is represented.
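For illustration, the general diagnostic-classifier recipe can be sketched as a linear probe trained on hidden states; the random activations, the singular/plural labels, and the scikit-learn probe below are placeholders rather than the authors' actual setup.

```python
# Sketch of a diagnostic classifier: a simple probe trained to predict a
# linguistic property (here, grammatical number) from hidden states.
# Random placeholder data stands in for real language-model activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 650))   # e.g. LSTM states, one per token
labels = rng.integers(0, 2, size=1000)         # 0 = singular, 1 = plural (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Above-chance test accuracy would suggest the property is linearly decodable
# from the states; with random placeholders it should hover around 0.5.
print("probe accuracy:", probe.score(X_test, y_test))
```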

Language Modelling

Learning compositionally through attentive guidance

no code implementations 20 May 2018 Dieuwke Hupkes, Anand Singh, Kris Korrel, German Kruszewski, Elia Bruni

While neural network models have been successfully applied to domains that require substantial generalisation skills, recent studies suggest that they struggle when solving their training task requires inferring its underlying compositional structure.

Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure

1 code implementation 28 Nov 2017 Dieuwke Hupkes, Sara Veldhoen, Willem Zuidema

Visualisation techniques alone do not suffice to develop an understanding of what the recurrent network encodes.

POS-tagging of Historical Dutch

no code implementations LREC 2016 Dieuwke Hupkes, Rens Bod

We present a study of the adequacy of current methods that are used for POS-tagging historical Dutch texts, as well as an exploration of the influence of employing different techniques to improve upon the current practice.

Domain Adaptation POS +1
