Search Results for author: Lea Frermann

Found 35 papers, 14 papers with code

Multi-modal Intent Classification for Assistive Robots with Large-scale Naturalistic Datasets

no code implementations ALTA 2021 Karun Varghese Mathew, Venkata S Aditya Tarigoppula, Lea Frermann

These technologies aim to help users with everyday reaching and grasping tasks, such as picking up an object and transporting it to a desired location, and their utility critically depends on the ease and effectiveness of communication between the user and the robot.

Intent Classification

Principled Analysis of Energy Discourse across Domains with Thesaurus-based Automatic Topic Labeling

2 code implementations ALTA 2021 Thomas Scelsi, Alfonso Martinez Arranz, Lea Frermann

With the increasing impact of Natural Language Processing tools like topic models in social science research, the experimental rigor and comparability of models and datasets have come under scrutiny.

Topic Models

Connecting the Dots in News Analysis: A Cross-Disciplinary Survey of Media Bias and Framing

no code implementations14 Sep 2023 Gisela Vallejo, Timothy Baldwin, Lea Frermann

The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently.

Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing

1 code implementation3 Jun 2023 Lea Frermann, Jiatong Li, Shima Khanehzar, Gosia Mikolajczak

Despite increasing interest in the automatic detection of media frames in NLP, the problem is typically simplified to single-label classification and adopts a topic-like view of frames, avoiding modelling of the broader document-level narrative.

Retrieval

A Large-Scale Multilingual Study of Visual Constraints on Linguistic Selection of Descriptions

no code implementations9 Feb 2023 Uri Berger, Lea Frermann, Gabriel Stanovsky, Omri Abend

We study the relation between visual input and linguistic choices by training classifiers to predict the probability of expressing a property from raw images, and find evidence supporting the claim that linguistic properties are constrained by visual context across languages.

Text Generation
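
For illustration, a minimal sketch of the classification setup described in the abstract above: a binary classifier estimating the probability that a description expresses a given property, from image features alone. The feature vectors, labels, and dimensions here are random stand-ins and assumptions, not the paper's actual setup.

```python
# Minimal sketch: predict the probability that a description expresses a
# property, from image features. Features and labels are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))     # stand-in image feature vectors
y = rng.integers(0, 2, size=1000)    # 1 = property expressed in the description

clf = LogisticRegression(max_iter=1000).fit(X, y)
p_expressed = clf.predict_proba(X[:5])[:, 1]  # per-image probability of expression
print(p_expressed)
```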

Professional Presentation and Projected Power: A Case Study of Implicit Gender Information in English CVs

no code implementations17 Nov 2022 Jinrui Yang, Sheilla Njoto, Marc Cheong, Leah Ruppanner, Lea Frermann

Gender discrimination in hiring is a pertinent and persistent bias in society, and a common motivating example for exploring bias in NLP.

A Computational Acquisition Model for Multimodal Word Categorization

1 code implementation NAACL 2022 Uri Berger, Gabriel Stanovsky, Omri Abend, Lea Frermann

Recent advances in self-supervised modeling of text and images open new opportunities for computational models of child language acquisition, which is believed to rely heavily on cross-modal signals.

Language Acquisition, Object Recognition

Unsupervised Cross-Lingual Transfer of Structured Predictors without Source Data

1 code implementation NAACL 2022 Kemal Kurniawan, Lea Frermann, Philip Schulz, Trevor Cohn

Providing technologies to communities or domains where training data is scarce or protected, e.g., for privacy reasons, is becoming increasingly important.

Cross-Lingual Transfer, Dependency Parsing +1

Contrastive Learning for Fair Representations

no code implementations22 Sep 2021 Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann

Trained classification models can unintentionally lead to biased representations and predictions, which can reinforce societal preconceptions and stereotypes.

Attribute, Contrastive Learning
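
The title points to a contrastive objective; below is a hedged sketch of a generic supervised contrastive loss that pulls together representations sharing a task label, one common way to shape representations alongside a debiasing term. It is not necessarily the paper's exact formulation; batch sizes, dimensions, and label counts are illustrative.

```python
# Generic supervised contrastive loss (sketch, not the paper's exact loss):
# anchors are attracted to same-label examples and repelled from the rest.
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    z = F.normalize(z, dim=1)                       # unit-norm representations
    sim = z @ z.t() / tau                           # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf")) # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)       # positives per anchor
    return -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts).mean()

z = torch.randn(32, 128)               # batch of encoder representations
labels = torch.randint(0, 4, (32,))    # main-task labels
print(supcon_loss(z, labels))
```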

Fairness-aware Class Imbalanced Learning

no code implementations EMNLP 2021 Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann

Class imbalance is a common challenge in many NLP tasks, and has clear connections to bias, in that bias in training data often leads to higher accuracy for majority groups at the expense of minority groups.

Fairness, Long-tail Learning
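
One standard remedy for the imbalance-driven accuracy gap described above is to reweight the training loss by inverse class frequency; the sketch below shows that common baseline, not the paper's specific fairness-aware objective.

```python
# Inverse-frequency class weighting: rare classes get larger loss weights,
# a standard baseline for class-imbalanced training.
import torch
import torch.nn.functional as F

logits = torch.randn(8, 3)                         # model scores for 3 classes
targets = torch.tensor([0, 0, 0, 0, 0, 1, 1, 2])   # imbalanced batch

counts = torch.bincount(targets, minlength=3).float()
weights = counts.sum() / (len(counts) * counts)    # inverse-frequency weights
loss = F.cross_entropy(logits, targets, weight=weights)
print(loss)
```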

PTST-UoM at SemEval-2021 Task 10: Parsimonious Transfer for Sequence Tagging

no code implementations SEMEVAL 2021 Kemal Kurniawan, Lea Frermann, Philip Schulz, Trevor Cohn

This paper describes PTST, a source-free unsupervised domain adaptation technique for sequence tagging, and its application to the SemEval-2021 Task 10 on time expression recognition.

Unsupervised Domain Adaptation

Framing Unpacked: A Semi-Supervised Interpretable Multi-View Model of Media Frames

1 code implementation NAACL 2021 Shima Khanehzar, Trevor Cohn, Gosia Mikolajczak, Andrew Turpin, Lea Frermann

Understanding how news media frame political issues is important due to its impact on public attitudes, yet hard to automate.

PPT: Parsimonious Parser Transfer for Unsupervised Cross-Lingual Adaptation

1 code implementation EACL 2021 Kemal Kurniawan, Lea Frermann, Philip Schulz, Trevor Cohn

Cross-lingual transfer is a leading technique for parsing low-resource languages in the absence of explicit supervision.

Cross-Lingual Transfer

Screenplay Summarization Using Latent Narrative Structure

2 code implementations ACL 2020 Pinelopi Papalampidi, Frank Keller, Lea Frermann, Mirella Lapata

Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront.

Document Summarization, Extractive Summarization +3

BookQA: Stories of Challenges and Opportunities

no code implementations2 Oct 2019 Stefanos Angelidis, Lea Frermann, Diego Marcheggiani, Roi Blanco, Lluís Màrquez

We present a system for answering questions based on the full text of books (BookQA), which first selects book passages given a question at hand, and then uses a memory network to reason and predict an answer.

Retrieval
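
A minimal sketch of the two-stage pipeline described in the abstract: retrieve candidate book passages for a question, then pass them to a reader (the paper uses a memory network; a simple TF-IDF retriever and hypothetical placeholder passages stand in here).

```python
# Retrieve-then-read sketch: TF-IDF retrieval over book passages; the
# top-scoring passages would feed the memory-network reader.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "The butler was seen near the library at midnight.",
    "Elizabeth inherited the estate from her uncle.",
    "A storm delayed the train to London by three hours.",
]
question = "Who inherited the estate?"

vec = TfidfVectorizer().fit(passages + [question])
scores = cosine_similarity(vec.transform([question]), vec.transform(passages))[0]
top = scores.argsort()[::-1][:2]       # keep the best-scoring passages

for i in top:
    print(scores[i], passages[i])      # candidates for the reader
```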

Inducing Document Structure for Aspect-based Summarization

1 code implementation ACL 2019 Lea Frermann, Alexandre Klementiev

In addition to improvements in summarization over topic-agnostic baselines, we demonstrate the benefit of the learnt document structure: we show that our models (a) learn to accurately segment documents by aspect, (b) can leverage the structure to produce both abstractive and extractive aspect-based summaries, and that (c) the learnt structure is particularly advantageous for summarizing long documents.

Abstractive Text Summarization
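
A toy illustration of the structure the abstract describes, assuming hypothetical aspects and keyword lexicons (the paper's model learns the segmentation rather than using keyword matching): assign each sentence to an aspect, then extract a per-aspect summary.

```python
# Toy aspect segmentation + extractive per-aspect summary. Aspects,
# keywords, and the document are hypothetical examples.
aspects = {
    "rooms": {"room", "bed", "clean"},
    "food": {"breakfast", "restaurant", "meal"},
}
doc = [
    "The room was spacious and very clean.",
    "Breakfast offered a wide range of fresh options.",
    "The bed was comfortable.",
]

segments = {a: [] for a in aspects}
for sent in doc:
    words = set(sent.lower().replace(".", "").split())
    best = max(aspects, key=lambda a: len(words & aspects[a]))
    segments[best].append(sent)          # assign sentence to closest aspect

for aspect, sents in segments.items():
    if sents:
        print(aspect, "->", sents[0])    # crude extractive summary per aspect
```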

Categorization in the Wild: Generalizing Cognitive Models to Naturalistic Data across Languages

no code implementations23 Feb 2019 Lea Frermann, Mirella Lapata

Categories such as animal or furniture are acquired at an early age and play an important role in processing, organizing, and communicating world knowledge.

World Knowledge

Whodunnit? Crime Drama as a Case for Natural Language Understanding

1 code implementation TACL 2018 Lea Frermann, Shay B. Cohen, Mirella Lapata

In this paper we argue that crime drama exemplified in television programs such as CSI:Crime Scene Investigation is an ideal testbed for approximating real-world natural language understanding and the complex inferences associated with it.

Natural Language Understanding

Prosodic Features from Large Corpora of Child-Directed Speech as Predictors of the Age of Acquisition of Words

1 code implementation27 Sep 2017 Lea Frermann, Michael C. Frank

The impressive ability of children to acquire language is a widely studied phenomenon, and the factors influencing the pace and patterns of word learning remain a subject of active research.

Inducing Semantic Micro-Clusters from Deep Multi-View Representations of Novels

no code implementations EMNLP 2017 Lea Frermann, György Szarvas

Automatically understanding the plot of novels is important both for informing literary scholarship and applications such as summarization or recommendation.

A Bayesian Model of Diachronic Meaning Change

no code implementations TACL 2016 Lea Frermann, Mirella Lapata

Word meanings change over time and an automated procedure for extracting this information from text would be useful for historical exploratory studies, information retrieval or question answering.

Change Detection, General Classification +3
