no code implementations • CL (ACL) 2022 • Rochelle Choenni, Ekaterina Shutova
The results provide insight into their information-sharing mechanisms and suggest that these linguistic properties are encoded jointly across typologically similar languages in these models.
no code implementations • 27 Nov 2024 • Zhi Zhang, Jiayi Shen, Congfeng Cao, Gaole Dai, Shiji Zhou, Qizhe Zhang, Shanghang Zhang, Ekaterina Shutova
Advancing towards generalist agents requires a single unified model that can process multiple tasks concurrently, underscoring the growing importance of training models on multiple downstream tasks simultaneously.
no code implementations • 27 Nov 2024 • Zhi Zhang, Srishti Yadav, Fengze Han, Ekaterina Shutova
While a variety of studies have investigated the processing of linguistic information within large language models, little is currently known about the inner workings of MLLMs and how linguistic and visual information interact within these models.
1 code implementation • 12 Oct 2024 • Ivo Verhoeven, Pushkar Mishra, Ekaterina Shutova
This paper introduces misinfo-general, a benchmark dataset for evaluating misinformation models' ability to perform out-of-distribution generalisation.
1 code implementation • 29 Aug 2024 • Rochelle Choenni, Ekaterina Shutova
Improving the alignment of Large Language Models (LLMs) with respect to the cultural values that they encode has become an increasingly important topic.
no code implementations • 12 Aug 2024 • Jay Owers, Ekaterina Shutova, Martha Lewis
In physics, density matrices are used to represent mixed states, i.e. probabilistic mixtures of pure states.
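To make the formalism concrete (standard quantum-information background stated as context, not a result of the paper): a probabilistic mixture of pure states $\lvert\psi_i\rangle$ occurring with probabilities $p_i$ is represented by the density matrix

```latex
\rho = \sum_i p_i \,\lvert \psi_i \rangle\langle \psi_i \rvert,
\qquad p_i \ge 0, \qquad \sum_i p_i = 1.
```

Such a $\rho$ is Hermitian, positive semi-definite, and has unit trace; a pure state is the special case with a single $p_i = 1$, giving $\rho = \lvert\psi\rangle\langle\psi\rvert$.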
no code implementations • 9 Jul 2024 • Joy Crosbie, Ekaterina Shutova
Our results show that even a minimal ablation of induction heads leads to ICL performance decreases of up to ~32% for abstract pattern recognition tasks, bringing the performance close to random.
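To make the intervention concrete, here is a minimal PyTorch sketch of zero-ablation, in which the outputs of selected attention heads are replaced with zeros before the output projection. The module and head indices are illustrative assumptions (and the causal mask is omitted for brevity); this is not the paper's code, which applies the same operation inside a pretrained model.

```python
import torch
import torch.nn as nn

class AblatableMultiHeadAttention(nn.Module):
    """Multi-head self-attention in which chosen heads can be zero-ablated."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, ablate_heads=frozenset()):
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, d_head); causal mask omitted for brevity
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        z = attn @ v
        for h in ablate_heads:          # zero-ablation: silence the chosen heads
            z[:, h] = 0.0
        return self.out(z.transpose(1, 2).reshape(b, t, d))

x = torch.randn(2, 16, 64)
mha = AblatableMultiHeadAttention(d_model=64, n_heads=8)
baseline = mha(x)
ablated = mha(x, ablate_heads={1, 5})   # heads 1 and 5 are hypothetical examples
```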
1 code implementation • 4 Jul 2024 • Gianluca Michelli, Xiaoyu Tong, Ekaterina Shutova
Metaphors are part of everyday language and shape the way in which we conceptualize the world.
no code implementations • 20 Jun 2024 • Rochelle Choenni, Sara Rajaee, Christof Monz, Ekaterina Shutova
While multilingual language models (MLMs) have been trained on 100+ languages, they are typically only evaluated across a handful of them due to a lack of available test data in most languages.
1 code implementation • 5 Jun 2024 • Alina Leidinger, Robert van Rooij, Ekaterina Shutova
Recent scholarship on reasoning in LLMs has supplied evidence of impressive performance and flexible adaptation to machine-generated or human feedback.
no code implementations • 21 May 2024 • Rochelle Choenni, Anne Lauscher, Ekaterina Shutova
Texts written in different languages reflect different culturally dependent beliefs of their writers.
1 code implementation • 2 Apr 2024 • Ivo Verhoeven, Pushkar Mishra, Rahel Beloch, Helen Yannakoudakis, Ekaterina Shutova
This mismatch can be partially attributed to the limitations of current evaluation setups that neglect the rapid evolution of online content and the underlying social graph.
no code implementations • 18 Mar 2024 • Xiaoyu Tong, Rochelle Choenni, Martha Lewis, Ekaterina Shutova
Metaphor understanding is therefore an essential task for large language models (LLMs).
1 code implementation • CVPR 2024 • Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, Shanghang Zhang
With the growing size of pre-trained models, fully fine-tuning them and storing all the parameters for each downstream task is costly and often infeasible.
no code implementations • 14 Nov 2023 • Rochelle Choenni, Ekaterina Shutova, Dan Garrette
Recent work has proposed explicitly inducing language-wise modularity in multilingual LMs via sparse fine-tuning (SFT) on per-language subnetworks as a means of better guiding cross-lingual sharing.
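As a rough illustration of the sparse fine-tuning idea (a sketch under my own assumptions, not the authors' implementation): a fixed binary mask per language restricts which parameters receive gradient updates, so each language effectively trains its own subnetwork.

```python
import torch

def masked_sgd_step(model, mask, lr=1e-3):
    """One SGD step that updates only the parameters selected by `mask`."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None:
                p -= lr * p.grad * mask[name]   # zero entries freeze weights

model = torch.nn.Linear(8, 2)
# Hypothetical per-language mask: here, a random 10%-dense subnetwork.
lang_mask = {n: (torch.rand_like(p) < 0.1).float()
             for n, p in model.named_parameters()}
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
masked_sgd_step(model, lang_mask)
```

In practice such masks are typically derived from a pruning-style criterion (e.g. the parameters that change most during an initial fine-tuning pass) rather than chosen at random.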
1 code implementation • 3 Nov 2023 • Alina Leidinger, Robert van Rooij, Ekaterina Shutova
The latest generation of LLMs can be prompted to achieve impressive zero-shot or few-shot performance in many NLP tasks.
no code implementations • 31 Oct 2023 • Claire E. Stevenson, Mathilde ter Veen, Rochelle Choenni, Han L. J. van der Maas, Ekaterina Shutova
We conclude that the LLMs we tested indeed tend to solve verbal analogies (of the form A:B::C:D) by association with the C-term, much as children do.
1 code implementation • 28 Oct 2023 • Giulio Starace, Konstantinos Papakostas, Rochelle Choenni, Apostolos Panagiotopoulos, Matteo Rosati, Alina Leidinger, Ekaterina Shutova
Large Language Models (LLMs) exhibit impressive performance on a range of NLP tasks, due to the general-purpose linguistic knowledge acquired during pretraining.
no code implementations • 22 May 2023 • Rochelle Choenni, Dan Garrette, Ekaterina Shutova
We further study how different fine-tuning languages influence model performance on a given test language and find that they can both reinforce and complement the knowledge acquired from data of the test language itself.
no code implementations • 15 May 2023 • Simone Tedeschi, Johan Bos, Thierry Declerck, Jan Hajic, Daniel Hershcovich, Eduard H. Hovy, Alexander Koller, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, Roberto Navigli
In the last five years, there has been a significant focus in Natural Language Processing (NLP) on developing larger Pretrained Language Models (PLMs) and introducing benchmarks such as SuperGLUE and SQuAD to measure their abilities in language understanding, reasoning, and reading comprehension.
1 code implementation • 17 Feb 2023 • Zhi Zhang, Helen Yannakoudakis, XianTong Zhen, Ekaterina Shutova
The task of multimodal referring expression comprehension (REC), which aims to localize an image region described by a natural language expression, has recently received increasing attention within the research community.
no code implementations • 25 Jan 2023 • Niels van der Heijden, Ekaterina Shutova, Helen Yannakoudakis
We present FewShotTextGCN, a novel method designed to effectively utilize the properties of word-document graphs for improved learning in low-resource settings.
2 code implementations • 28 Nov 2022 • Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, Ekaterina Shutova
This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2.
no code implementations • 31 Oct 2022 • Rochelle Choenni, Dan Garrette, Ekaterina Shutova
Large multilingual language models typically share their parameters across all languages, which enables cross-lingual task transfer, but learning can also be hindered when training updates from different languages are in conflict.
1 code implementation • 31 Oct 2022 • Avyav Kumar Singh, Ekaterina Shutova, Helen Yannakoudakis
Existing approaches to few-shot learning in NLP rely on large language models (LLMs) and/or fine-tuning of these to generalise on out-of-distribution data.
4 code implementations • 9 Jun 2022 • Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu
BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.
no code implementations • EMNLP 2021 • Rochelle Choenni, Ekaterina Shutova, Robert van Rooij
In this paper, we investigate what types of stereotypical information are captured by pretrained language models.
1 code implementation • ACL 2021 • Rishav Hada, Sohi Sudhir, Pushkar Mishra, Helen Yannakoudakis, Saif M. Mohammad, Ekaterina Shutova
On social media platforms, hateful and offensive language negatively impacts the mental well-being of users and the participation of people from diverse backgrounds.
no code implementations • ACL 2021 • Yingjun Du, Nithin Holla, XianTong Zhen, Cees G. M. Snoek, Ekaterina Shutova
A critical challenge faced by supervised word sense disambiguation (WSD) is the lack of large annotated datasets with sufficient coverage of words in their diversity of senses.
no code implementations • NAACL 2021 • Xiaoyu Tong, Ekaterina Shutova, Martha Lewis
Metaphor is an indispensable part of human cognition and everyday communication.
1 code implementation • ACL 2022 • Anna Langedijk, Verna Dankers, Phillip Lippe, Sander Bos, Bryan Cardenas Guevara, Helen Yannakoudakis, Ekaterina Shutova
Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks.
no code implementations • 8 Apr 2021 • Vinodkumar Prabhakaran, Marek Rei, Ekaterina Shutova
Metaphors are widely used in political rhetoric as an effective framing device.
no code implementations • Findings (EMNLP) 2021 • Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
Specifically, we review and analyze state-of-the-art methods that leverage user or community information to enhance the understanding and detection of abusive language.
1 code implementation • EACL 2021 • Pere-Lluís Huguet-Cabot, David Abadi, Agneta Fischer, Ekaterina Shutova
We set a baseline for two tasks related to populist attitudes and present a set of multi-task learning models that leverage and demonstrate the importance of emotion and group identification as auxiliary tasks.
Ranked #1 for populist attitude prediction on the Us vs. Them dataset.
1 code implementation • EACL 2021 • Niels van der Heijden, Helen Yannakoudakis, Pushkar Mishra, Ekaterina Shutova
The great majority of languages in the world are considered under-resourced for the successful application of deep learning methods.
1 code implementation • 23 Dec 2020 • Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova, Helen Yannakoudakis
An increasingly common expression of online hate speech is multimodal in nature and comes in the form of memes.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Pere-Lluís Huguet Cabot, Verna Dankers, David Abadi, Agneta Fischer, Ekaterina Shutova
There has been an increased interest in modelling political discourse within the natural language processing (NLP) community, in tasks such as political bias and misinformation detection, among others.
no code implementations • 24 Oct 2020 • Rochelle Choenni, Ekaterina Shutova
Multilingual sentence encoders are widely used to transfer NLP models across languages.
no code implementations • 27 Sep 2020 • Rochelle Choenni, Ekaterina Shutova
Multilingual sentence encoders have seen much success in cross-lingual model transfer for downstream NLP tasks.
1 code implementation • 10 Sep 2020 • Nithin Holla, Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
Lifelong learning requires models that can continuously learn from sequential streams of data without suffering catastrophic forgetting due to shifts in data distributions.
1 code implementation • 14 Aug 2020 • Shantanu Chandra, Pushkar Mishra, Helen Yannakoudakis, Madhav Nimishakavi, Marzieh Saeidi, Ekaterina Shutova
Existing research has modeled the structure, style, content, and patterns in dissemination of online posts, as well as the demographic traits of users who interact with them.
no code implementations • WS 2020 • Verna Dankers, Karan Malhotra, Gaurav Kudva, Volodymyr Medentsiy, Ekaterina Shutova
Existing approaches to metaphor processing typically rely on local features, such as immediate lexico-syntactic contexts or information within a given sentence.
no code implementations • ACL 2020 • Santhosh Rajamanickam, Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Nithin Holla, Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
Meta-learning aims to solve this problem by training a model on a large number of few-shot tasks, with the objective of learning new tasks quickly from a small number of examples.
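To make the episodic setup concrete, here is a minimal first-order MAML-style training step (an illustrative sketch with assumed task format and hyperparameters, not the paper's code): the model takes a few gradient steps on each task's support set, and the meta-update is driven by the loss on the held-out query set.

```python
import copy
import torch
import torch.nn.functional as F

def meta_train_step(model, tasks, meta_opt, inner_lr=0.01, inner_steps=3):
    """One meta-update over a batch of (support_x, support_y, query_x, query_y) tasks."""
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        learner = copy.deepcopy(model)               # task-specific fast weights
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # inner loop: adapt on support set
            inner_opt.zero_grad()
            F.cross_entropy(learner(support_x), support_y).backward()
            inner_opt.step()
        inner_opt.zero_grad()                        # clear stale inner-loop grads
        F.cross_entropy(learner(query_x), query_y).backward()
        # first-order approximation: copy query-loss grads onto the meta-model
        for p, q in zip(model.parameters(), learner.parameters()):
            p.grad = q.grad.clone() if p.grad is None else p.grad + q.grad
    meta_opt.step()                                  # outer loop: meta-update
```

Here `meta_opt` would be, e.g., `torch.optim.Adam(model.parameters())`; gradients are accumulated across the task batch before a single meta-step.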
no code implementations • TACL 2020 • Vesna G. Djokic, Jean Maillard, Luana Bulat, Ekaterina Shutova
We evaluate a range of semantic models (word embeddings, compositional, and visual models) in their ability to decode brain activity associated with reading of both literal and metaphoric sentences.
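The decoding methodology behind this kind of evaluation can be sketched in a few lines (synthetic data and parameter choices are mine, not the paper's pipeline): fit a linear map from brain activity to semantic vectors and score it with leave-two-out, pairwise (2 vs. 2) matching.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
brain = rng.normal(size=(60, 500))   # stimuli x voxels (synthetic stand-in)
embed = rng.normal(size=(60, 300))   # stimuli x embedding dimensions

def two_vs_two(i, j):
    """Train without stimuli i and j; check the correct pairing is closer."""
    train = [k for k in range(60) if k not in (i, j)]
    model = Ridge(alpha=1.0).fit(brain[train], embed[train])
    pi, pj = model.predict(brain[[i, j]])
    d = lambda a, b: np.linalg.norm(a - b)
    return d(pi, embed[i]) + d(pj, embed[j]) < d(pi, embed[j]) + d(pj, embed[i])

pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]  # subset for speed
acc = np.mean([two_vs_two(i, j) for i, j in pairs])
print(f"2 vs 2 accuracy: {acc:.2f}")   # chance level (~0.5) on random data
```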
no code implementations • 15 Dec 2019 • Niels van der Heijden, Samira Abnar, Ekaterina Shutova
The lack of annotated data in many languages is a well-known challenge within the field of multilingual natural language processing (NLP).
no code implementations • IJCNLP 2019 • Verna Dankers, Marek Rei, Martha Lewis, Ekaterina Shutova
Metaphors allow us to convey emotion by connecting physical experiences and abstract concepts.
no code implementations • 13 Aug 2019 • Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
Abuse on the Internet represents an important societal problem of our time.
no code implementations • ACL 2019 • Vesna Djokic, Jean Maillard, Luana Bulat, Ekaterina Shutova
Recent work shows that distributional semantic models can be used to decode patterns of brain activity associated with individual words and sentence meanings.
no code implementations • SEMEVAL 2019 • Christopher Davis, Luana Bulat, Anita Lilla Vero, Ekaterina Shutova
Multimodal semantic models that extend linguistic representations with additional perceptual input have proved successful in a range of natural language processing (NLP) tasks.
no code implementations • SEMEVAL 2019 • Guy Aglionby, Chris Davis, Pushkar Mishra, Andrew Caines, Helen Yannakoudakis, Marek Rei, Ekaterina Shutova, Paula Buttery
We describe the CAMsterdam team entry to the SemEval-2019 Shared Task 6 on offensive language identification in Twitter data.
no code implementations • NAACL 2019 • Pushkar Mishra, Marco del Tredici, Helen Yannakoudakis, Ekaterina Shutova
Abuse on the Internet represents a significant societal problem of our time.
1 code implementation • NAACL 2019 • Jesse Mu, Helen Yannakoudakis, Ekaterina Shutova
Most current approaches to metaphor identification use restricted linguistic contexts, e.g. by considering only a verb's arguments or the sentence containing a phrase.
no code implementations • 14 Feb 2019 • Pushkar Mishra, Marco del Tredici, Helen Yannakoudakis, Ekaterina Shutova
The rapid growth of social media in recent years has fed into some highly undesirable phenomena such as proliferation of abusive and offensive language on the Internet.
no code implementations • WS 2018 • Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
The current state-of-the-art approaches to abusive language detection, based on recurrent neural networks, do not explicitly address this problem and resort to a generic OOV (out-of-vocabulary) embedding for unseen words.
1 code implementation • COLING 2018 • Pushkar Mishra, Marco del Tredici, Helen Yannakoudakis, Ekaterina Shutova
The rapid growth of social media in recent years has fed into some highly undesirable phenomena such as proliferation of hateful and offensive language on the Internet.
no code implementations • CL 2019 • Edoardo Maria Ponti, Helen O'Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, Anna Korhonen
Linguistic typology aims to capture structural and semantic variation across the world's languages.
no code implementations • WS 2018 • Chee Wee (Ben) Leong, Beata Beigman Klebanov, Ekaterina Shutova
As the community working on computational approaches to figurative language is growing and as methods and data become increasingly diverse, it is important to create widely shared empirical knowledge of the level of system performance in a range of contexts, thus facilitating progress in this area.
no code implementations • EMNLP 2017 • Marek Rei, Luana Bulat, Douwe Kiela, Ekaterina Shutova
The ubiquity of metaphor in our everyday communication makes it an important problem for natural language understanding.
no code implementations • WS 2017 • Ekaterina Kochmar, Ekaterina Shutova
Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages.
no code implementations • EMNLP 2017 • Luana Bulat, Stephen Clark, Ekaterina Shutova
Research in computational semantics is increasingly guided by our understanding of human semantic processing.
no code implementations • SEMEVAL 2017 • Ekaterina Shutova, Andreas Wundsam, Helen Yannakoudakis
Frame-semantic parsing and semantic role labelling, which aim to automatically assign semantic roles to arguments of verbs in a sentence, have become an active strand of research in NLP.
no code implementations • EACL 2017 • Luana Bulat, Stephen Clark, Ekaterina Shutova
One of the key problems in computational metaphor modelling is finding the optimal level of abstraction of semantic representations, such that these are able to capture and generalise metaphorical mechanisms.
no code implementations • CL 2017 • Ekaterina Shutova, Lin Sun, Elkin Darío Gutiérrez, Patricia Lichtenstein, Srini Narayanan
We investigate different levels and types of supervision (learning from linguistic examples vs. learning from a given set of metaphorical mappings vs. learning without annotation) in flat and hierarchical, unconstrained and constrained clustering settings.
no code implementations • 28 Sep 2016 • Ekaterina Shutova, Patricia Lichtenstein
Natural language processing techniques are increasingly applied to identify social trends and predict behavior based on large text collections.
no code implementations • TACL 2016 • E.D. Gutiérrez, Ekaterina Shutova, Patricia Lichtenstein, Gerard de Melo, Luca Gilardi
Understanding cross-cultural differences has important implications for world affairs and many aspects of the life of society.