Search Results for author: Adam Lopez

Found 55 papers, 8 papers with code

Adaptor Grammars for Unsupervised Paradigm Clustering

no code implementations ACL (SIGMORPHON) 2021 Kate McCurdy, Sharon Goldwater, Adam Lopez

This work describes the Edinburgh submission to the SIGMORPHON 2021 Shared Task 2 on unsupervised morphological paradigm clustering.

Clustering Task 2

First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models

no code implementations8 Nov 2023 Naomi Saphra, Eve Fleisig, Kyunghyun Cho, Adam Lopez

Many NLP researchers are experiencing an existential crisis triggered by the astonishing success of ChatGPT and other systems based on large language models (LLMs).

Machine Translation

Taming the Sigmoid Bottleneck: Provably Argmaxable Sparse Multi-Label Classification

1 code implementation16 Oct 2023 Andreas Grivas, Antonio Vergari, Adam Lopez

We then show that they can be prevented in practice by introducing a Discrete Fourier Transform (DFT) output layer, which guarantees that all sparse label combinations with up to $k$ active labels are argmaxable.

Multi-class Classification Multi-Label Classification

SemEval 2021 Task 7: HaHackathon, Detecting and Rating Humor and Offense

no code implementations SEMEVAL 2021 J. A. Meaney, Steven Wilson, Luis Chiruzzo, Adam Lopez, Walid Magdy

Our subtasks were binary humor detection, prediction of humor and offense ratings, and a novel controversy task: to predict if the variance in the humor ratings was higher than a specific threshold.

Humor Detection
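
The controversy subtask above amounts to thresholding the variance of per-item humor ratings. A minimal sketch (the threshold value here is illustrative, not the task's actual cutoff):

```python
import statistics

def controversy_label(ratings, threshold=2.0):
    """Label an item 'controversial' when the variance of its
    humor ratings exceeds a fixed threshold."""
    return statistics.pvariance(ratings) > threshold

# Strong annotator disagreement -> controversial
print(controversy_label([0, 4, 0, 4]))   # True  (variance = 4.0)
# Broad agreement -> not controversial
print(controversy_label([3, 3, 4, 3]))   # False (variance ~ 0.19)
```

In practice a system would predict this label from the text alone, without access to the ratings; the snippet only shows how the gold label is derived.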

Intrinsic Bias Metrics Do Not Correlate with Application Bias

no code implementations ACL 2021 Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sanchez, Mugdha Pandya, Adam Lopez

We urge researchers working on debiasing to focus on extrinsic measures of bias, and to make using these measures more feasible via creation of new challenge sets and annotated test data.

Word Embeddings

LSTMs Compose—and Learn—Bottom-Up

no code implementations Findings of the Association for Computational Linguistics 2020 Naomi Saphra, Adam Lopez

To explore the inductive biases that cause these compositional representations to arise during training, we conduct simple experiments on synthetic data.

LemMED: Fast and Effective Neural Morphological Analysis with Short Context Windows

no code implementations21 Oct 2020 Aibek Makazhanov, Sharon Goldwater, Adam Lopez

We present LemMED, a character-level encoder-decoder for contextual morphological analysis (combined lemmatization and tagging).

Lemmatization Morphological Analysis +1

LSTMs Compose (and Learn) Bottom-Up

no code implementations6 Oct 2020 Naomi Saphra, Adam Lopez

To explore the inductive biases that cause these compositional representations to arise during training, we conduct simple experiments on synthetic data.

Inflecting when there's no majority: Limitations of encoder-decoder neural networks as cognitive models for German plurals

no code implementations ACL 2020 Kate McCurdy, Sharon Goldwater, Adam Lopez

Encoder-decoder models do generalize the most frequently produced plural class, but do not show human-like variability or 'regular' extension of these other plural markers.

Word Interdependence Exposes How LSTMs Compose Representations

no code implementations27 Apr 2020 Naomi Saphra, Adam Lopez

Recent work in NLP shows that LSTM language models capture compositional structure in language data.

How to Evaluate Word Representations of Informal Domain?

1 code implementation12 Nov 2019 Yekun Chai, Naomi Saphra, Adam Lopez

Diverse word representations have surged in most state-of-the-art natural language processing (NLP) applications.

Word Embeddings

Cross-lingual topic prediction for speech using translations

no code implementations29 Aug 2019 Sameer Bansal, Herman Kamper, Adam Lopez, Sharon Goldwater

Given a large amount of unannotated speech in a low-resource language, can we classify the speech utterances by topic?

Humanitarian Speech-to-Text Translation +1

Sparsity Emerges Naturally in Neural Language Models

no code implementations ICML 2019 Workshop on Identifying and Understanding Deep Learning Phenomena Naomi Saphra, Adam Lopez

Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks.

Do LSTMs Learn Compositionally?

no code implementations28 May 2019 Naomi Saphra, Adam Lopez

LSTM-based language models exhibit compositionality in their representations, but how this behavior emerges over the course of training has not been explored.

Explicitly modeling case improves neural dependency parsing

no code implementations WS 2018 Clara Vania, Adam Lopez

Neural dependency parsing models that compose word representations from characters can presumably exploit morphosyntax when making attachment decisions.

Dependency Parsing Multi-Task Learning

'Indicatements' that character language models learn English morpho-syntactic units and regularities

no code implementations WS 2018 Yova Kementchedjhieva, Adam Lopez

Character language models have access to surface morphological patterns, but it is not clear whether or how they learn abstract morphological regularities.

Feature Engineering Language Modelling +3

Understanding Learning Dynamics Of Language Models with SVCCA

no code implementations NAACL 2019 Naomi Saphra, Adam Lopez

Research has shown that neural models implicitly encode linguistic features, but there has been no research showing how these encodings arise as the models are trained.

Language Modelling

The problem with probabilistic DAG automata for semantic graphs

no code implementations NAACL 2019 Ieva Vasiljeva, Sorcha Gilroy, Adam Lopez

Semantic representations in the form of directed acyclic graphs (DAGs) have been introduced in recent years, and to model them, we need probabilistic models of DAGs.

Neural Networks for Cross-lingual Negation Scope Detection

no code implementations4 Oct 2018 Federico Fancellu, Adam Lopez, Bonnie Webber

Negation scope has been annotated in several English and Chinese corpora, and highly accurate models for this task in these languages have been learned from these annotations.

Cross-Lingual Word Embeddings Negation +1

Pre-training on high-resource speech recognition improves low-resource speech-to-text translation

1 code implementation NAACL 2019 Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, Sharon Goldwater

Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Indicatements that character language models learn English morpho-syntactic units and regularities

no code implementations31 Aug 2018 Yova Kementchedjhieva, Adam Lopez

Character language models have access to surface morphological patterns, but it is not clear whether or how they learn abstract morphological regularities.

Language Modelling

What do character-level models learn about morphology? The case of dependency parsing

no code implementations EMNLP 2018 Clara Vania, Andreas Grivas, Adam Lopez

When parsing morphologically-rich languages with neural models, it is beneficial to model input at the character level, and it has been claimed that this is because character-level models learn morphology.

Dependency Parsing Morphological Analysis

Does Ability Affect Alignment in Second Language Tutorial Dialogue?

no code implementations WS 2018 Arabella Sinclair, Adam Lopez, C. G. Lucas, Dragan Gasevic

We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.

A Structured Syntax-Semantics Interface for English-AMR Alignment

1 code implementation NAACL 2018 Ida Szubert, Adam Lopez, Nathan Schneider

Abstract Meaning Representation (AMR) annotations are often assumed to closely mirror dependency syntax, but AMR explicitly does not require this, and the assumption has never been tested.

AMR Parsing

Low-Resource Speech-to-Text Translation

no code implementations24 Mar 2018 Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, Sharon Goldwater

We explore models trained on between 20 and 160 hours of data, and find that although models trained on less data have considerably lower BLEU scores, they can still predict words with relatively high precision and recall: around 50% for a model trained on 50 hours of data, versus around 60% for the full 160-hour model.

Machine Translation speech-recognition +3

Weighted DAG Automata for Semantic Graphs

no code implementations CL 2018 David Chiang, Frank Drewes, Daniel Gildea, Adam Lopez, Giorgio Satta

Graphs have a variety of uses in natural language processing, particularly as representations of linguistic meaning.

Spoken Term Discovery for Language Documentation using Translations

no code implementations WS 2017 Antonios Anastasopoulos, Sameer Bansal, David Chiang, Sharon Goldwater, Adam Lopez

Vast amounts of speech data collected for language documentation and research remain untranscribed and unsearchable, but often a small amount of speech may have text translations available.

Translation

A Generative Parser with a Discriminative Recognition Algorithm

1 code implementation ACL 2017 Jianpeng Cheng, Adam Lopez, Mirella Lapata

Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models.

Constituency Parsing Language Modelling +1

Parsing Graphs with Regular Graph Grammars

no code implementations SEMEVAL 2017 Sorcha Gilroy, Adam Lopez, Sebastian Maneth

Recently, several datasets have become available which represent natural language phenomena as graphs.

Machine Translation

From Characters to Words to in Between: Do We Capture Morphology?

no code implementations ACL 2017 Clara Vania, Adam Lopez

Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams.

Language Modelling
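
The excerpt above describes composing word representations from subword units. A minimal illustration of one such composition function, additive combination of character n-gram embeddings in the style of fastText (the paper compares several composition functions; the embedding table here is a toy stand-in):

```python
import numpy as np

def char_ngrams(word, n=3):
    """Extract character n-grams with boundary markers, so that
    prefixes and suffixes get distinct units (e.g. '<ca' vs 'cat')."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def compose(word, ngram_embeddings, dim=4):
    """Compose a word vector by summing the embeddings of its
    character n-grams; unknown n-grams are simply skipped."""
    vecs = [ngram_embeddings[g] for g in char_ngrams(word)
            if g in ngram_embeddings]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

# Toy embedding table: in a real model these vectors are learned.
rng = np.random.default_rng(0)
table = {g: rng.normal(size=4) for g in char_ngrams("cats") + char_ngrams("cat")}
print(char_ngrams("cat"))        # ['<ca', 'cat', 'at>']
print(compose("cat", table).shape)
```

Other composition functions studied in this line of work replace the sum with a bi-LSTM or CNN over the same subword units.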

Universal Dependencies to Logical Form with Negation Scope

no code implementations WS 2017 Federico Fancellu, Siva Reddy, Adam Lopez, Bonnie Webber

Many language technology applications would benefit from the ability to represent negation and its scope on top of widely-used linguistic resources.

Negation

Detecting negation scope is easy, except when it isn't

no code implementations EACL 2017 Federico Fancellu, Adam Lopez, Bonnie Webber, Hangfeng He

Several corpora have been annotated with negation scope (the set of words whose meaning is negated by a cue like the word "not"), leading to the development of classifiers that detect negation scope with high accuracy.

Negation

Towards speech-to-text translation without speech recognition

no code implementations EACL 2017 Sameer Bansal, Herman Kamper, Adam Lopez, Sharon Goldwater

We explore the problem of translating speech to text in low-resource scenarios where neither automatic speech recognition (ASR) nor machine translation (MT) are available, but we have training data in the form of audio paired with text translations.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4

Universal Dependencies to Logical Forms with Negation Scope

1 code implementation10 Feb 2017 Federico Fancellu, Siva Reddy, Adam Lopez, Bonnie Webber

Many language technology applications would benefit from the ability to represent negation and its scope on top of widely-used linguistic resources.

Negation

Weakly supervised spoken term discovery using cross-lingual side information

no code implementations21 Sep 2016 Sameer Bansal, Herman Kamper, Sharon Goldwater, Adam Lopez

Recent work on unsupervised term discovery (UTD) aims to identify and cluster repeated word-like units from audio alone.

Evaluating Informal-Domain Word Representations With UrbanDictionary

1 code implementation WS 2016 Naomi Saphra, Adam Lopez

Existing corpora for intrinsic evaluation are not targeted towards tasks in informal domains such as Twitter or news comment forums.

Gappy Pattern Matching on GPUs for On-Demand Extraction of Hierarchical Translation Grammars

no code implementations TACL 2015 Hua He, Jimmy Lin, Adam Lopez

We believe that GPU-based extraction of hierarchical grammars is an attractive proposition, particularly for MT applications that demand high throughput.

Machine Translation Translation
