Search Results for author: Anna Rogers

Found 30 papers, 7 papers with code

‘Just What do You Think You’re Doing, Dave?’ A Checklist for Responsible Data Use in NLP

no code implementations Findings (EMNLP) 2021 Anna Rogers, Timothy Baldwin, Kobi Leins

A key part of the NLP ethics movement is responsible use of data, but exactly what that means or how it can be best achieved remain unclear.

Ethics Position

Machine Reading, Fast and Slow: When Do Models “Understand” Language?

no code implementations COLING 2022 Sagnik Ray Choudhury, Anna Rogers, Isabelle Augenstein

Two of the most fundamental issues in Natural Language Understanding (NLU) at present are: (a) how it can be established whether deep learning-based models score highly on NLU benchmarks for the "right" reasons; and (b) what those reasons would even be.

coreference-resolution counterfactual +2

Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice

no code implementations 14 Aug 2023 Alexandra Sasha Luccioni, Anna Rogers

This position paper contributes a definition of LLMs, explicates some of the assumptions made regarding their functionality, and outlines the existing evidence for and against them.

Fact Checking Language Modelling +1

The ROOTS Search Tool: Data Transparency for LLMs

1 code implementation 27 Feb 2023 Aleksandra Piktus, Christopher Akiki, Paulo Villegas, Hugo Laurençon, Gérard Dupont, Alexandra Sasha Luccioni, Yacine Jernite, Anna Rogers

ROOTS is a 1.6TB multilingual text corpus developed for the training of BLOOM, currently the largest language model explicitly accompanied by commensurate data governance efforts.

Language Modelling

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

6 code implementations9 Nov 2022 BigScience Workshop, :, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Taşar, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. 
Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Daniel McDuff, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. 
Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, Thomas Wolf

Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions.

Language Modelling Multilingual NLP

Machine Reading, Fast and Slow: When Do Models "Understand" Language?

no code implementations 15 Sep 2022 Sagnik Ray Choudhury, Anna Rogers, Isabelle Augenstein

Two of the most fundamental challenges in Natural Language Understanding (NLU) at present are: (a) how to establish whether deep learning-based models score highly on NLU benchmarks for the 'right' reasons; and (b) to understand what those reasons would even be.

coreference-resolution counterfactual +2

Outlier Dimensions that Disrupt Transformers Are Driven by Frequency

1 code implementation 23 May 2022 Giovanni Puccetti, Anna Rogers, Aleksandr Drozd, Felice Dell'Orletta

While Transformer-based language models are generally very robust to pruning, there is the recently discovered outlier phenomenon: disabling only 48 out of 110M parameters in BERT-base drops its performance by nearly 30% on MNLI.
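For readers wondering what "disabling" those parameters looks like in practice, here is a minimal, hypothetical sketch (assuming PyTorch and the Hugging Face transformers library) that zeroes out one candidate outlier dimension in every output LayerNorm of BERT-base; the dimension index and the evaluation setup are placeholders, not the paper's exact configuration.

```python
# Hypothetical sketch: zero out one "outlier" dimension in every output
# LayerNorm of BERT-base. With 12 layers x 2 LayerNorms x (weight + bias),
# this disables 48 parameters, after which the model could be re-evaluated
# (e.g. on MNLI) to observe a performance drop like the one described above.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # 3 labels as in MNLI; head is freshly initialised

OUTLIER_DIM = 308  # placeholder index, not the dimension identified in the paper

with torch.no_grad():
    for layer in model.bert.encoder.layer:
        # Each encoder layer has two LayerNorms; disable the chosen
        # dimension in their scaling weight and bias.
        for ln in (layer.attention.output.LayerNorm, layer.output.LayerNorm):
            ln.weight[OUTLIER_DIM] = 0.0
            ln.bias[OUTLIER_DIM] = 0.0

# The modified model would then be fine-tuned and evaluated on MNLI with the
# usual transformers Trainer or an equivalent evaluation loop.
```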

What Factors Should Paper-Reviewer Assignments Rely On? Community Perspectives on Issues and Ideals in Conference Peer-Review

1 code implementation NAACL 2022 Terne Sasha Thorn Jakobsen, Anna Rogers

Both scientific progress and individual researcher careers depend on the quality of peer review, which in turn depends on paper-reviewer matching.

Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics

1 code implementation EMNLP (insights) 2021 Prajjwal Bhargava, Aleksandr Drozd, Anna Rogers

Much of the recent progress in NLU has been shown to be due to models learning dataset-specific heuristics.

'Just What do You Think You're Doing, Dave?' A Checklist for Responsible Data Use in NLP

no code implementations 14 Sep 2021 Anna Rogers, Tim Baldwin, Kobi Leins

A key part of the NLP ethics movement is responsible use of data, but exactly what that means or how it can be best achieved remain unclear.

Ethics Position

QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension

no code implementations 27 Jul 2021 Anna Rogers, Matt Gardner, Isabelle Augenstein

Alongside the huge volume of research on deep learning models in NLP in recent years, there has also been much work on the benchmark datasets needed to track modeling progress.

Question Answering Reading Comprehension

On the Interaction of Belief Bias and Explanations

no code implementations Findings (ACL) 2021 Ana Valeria Gonzalez, Anna Rogers, Anders Søgaard

A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them.

Benchmarking

Changing the World by Changing the Data

no code implementations ACL 2021 Anna Rogers

The NLP community is currently investing far more research and resources into the development of deep learning models than into training data.

Position

A guide to the dataset explosion in QA, NLI, and commonsense reasoning

no code implementations COLING 2020 Anna Rogers, Anna Rumshisky

Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work.

Natural Language Inference Question Answering

What Can We Do to Improve Peer Review in NLP?

no code implementations Findings (EMNLP) 2020 Anna Rogers, Isabelle Augenstein

Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious.

When BERT Plays the Lottery, All Tickets Are Winning

no code implementations EMNLP 2020 Sai Prasanna, Anna Rogers, Anna Rumshisky

Large Transformer-based models were shown to be reducible to a smaller number of self-attention heads and layers.
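As a hedged illustration of what reducing a model to fewer self-attention heads can mean in practice, the sketch below (assuming the Hugging Face transformers library) prunes a handful of heads from BERT-base; the layer and head indices are arbitrary placeholders, not the "winning ticket" subnetworks studied in the paper.

```python
# Hypothetical sketch: remove a few self-attention heads from BERT-base
# using transformers' built-in pruning utility. The layer and head indices
# are arbitrary placeholders, not the subnetworks identified in the paper.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Map of {layer index: [head indices to remove]}.
heads_to_prune = {0: [2, 7], 5: [0], 11: [4, 9]}
model.prune_heads(heads_to_prune)

# The pruned model keeps the same interface and could be fine-tuned or
# evaluated on GLUE tasks to compare against the full model.
```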

A Primer in BERTology: What we know about how BERT works

no code implementations 27 Feb 2020 Anna Rogers, Olga Kovaleva, Anna Rumshisky

Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited.

Calls to Action on Social Media: Detection, Social Impact, and Censorship Potential

no code implementations WS 2019 Anna Rogers, Olga Kovaleva, Anna Rumshisky

Calls to action on social media are known to be effective means of mobilization in social movements, and a frequent target of censorship.

NarrativeTime: Dense Temporal Annotation on a Timeline

no code implementations 29 Aug 2019 Anna Rogers, Marzena Karpinska, Ankita Gupta, Vladislav Lialin, Gregory Smelkov, Anna Rumshisky

For the past decade, temporal annotation has been sparse: only a small portion of event pairs in a text was annotated.

Chunking

Revealing the Dark Secrets of BERT

no code implementations IJCNLP 2019 Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky

BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success.

Adversarial Decomposition of Text Representation

2 code implementations NAACL 2019 Alexey Romanov, Anna Rumshisky, Anna Rogers, David Donahue

We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence.

Sentence

RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian

no code implementations COLING 2018 Anna Rogers, Alexey Romanov, Anna Rumshisky, Svitlana Volkova, Mikhail Gronas, Alex Gribov

This paper presents RuSentiment, a new dataset for sentiment analysis of social media posts in Russian, and a new set of comprehensive annotation guidelines that are extensible to other languages.

Active Learning General Classification +2
