Search Results for author: Bradley Hauer

Found 29 papers, 1 paper with code

You Shall Know the Most Frequent Sense by the Company it Keeps

no code implementations 21 Aug 2018 Bradley Hauer, Yixing Luan, Grzegorz Kondrak

Identification of the most frequent sense of a polysemous word is an important semantic task.

Translation

One Homonym per Translation

no code implementations 17 Apr 2019 Bradley Hauer, Grzegorz Kondrak

The study of homonymy is vital to resolving fundamental problems in lexical semantics.

Translation

Cognate Projection for Low-Resource Inflection Generation

no code implementations WS 2019 Bradley Hauer, Amir Ahmad Habibi, Yixing Luan, Rashed Rubby Riyadh, Grzegorz Kondrak

We propose cognate projection as a method of crosslingual transfer for inflection generation in the context of the SIGMORPHON 2019 Shared Task.

Synonymy = Translational Equivalence

no code implementations 28 Apr 2020 Bradley Hauer, Grzegorz Kondrak

Synonymy and translational equivalence are the relations of sameness of meaning within and across languages.

Low-Resource G2P and P2G Conversion with Synthetic Training Data

no code implementations WS 2020 Bradley Hauer, Amir Ahmad Habibi, Yixing Luan, Arnob Mallik, Grzegorz Kondrak

This paper presents the University of Alberta systems and results in the SIGMORPHON 2020 Task 1: Multilingual Grapheme-to-Phoneme Conversion.

Semi-Supervised and Unsupervised Sense Annotation via Translations

no code implementations RANLP 2021 Bradley Hauer, Grzegorz Kondrak, Yixing Luan, Arnob Mallik, Lili Mou

Our two unsupervised methods refine sense annotations produced by a knowledge-based WSD system via lexical translations in a parallel corpus.

Machine Translation, Translation, +1
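A rough illustration of the refinement idea in the entry above (a minimal sketch, not the authors' system): the sense inventory, sense IDs, and French translations below are toy assumptions.

    # Refine a knowledge-based WSD guess using the translation of the target
    # word observed in a sentence-aligned parallel corpus (toy example).
    SENSE_TRANSLATIONS = {
        "bank%financial": {"banque"},
        "bank%river": {"rive", "berge"},
    }

    def refine_annotation(kb_sense, observed_translation):
        """Keep the KB sense if the observed translation is compatible with it;
        otherwise switch to a sense that the translation does support."""
        if observed_translation in SENSE_TRANSLATIONS.get(kb_sense, set()):
            return kb_sense
        for sense, translations in SENSE_TRANSLATIONS.items():
            if observed_translation in translations:
                return sense
        return kb_sense  # no translation evidence; keep the original annotation

    # The KB system tagged "bank" as financial, but the aligned French word is
    # "rive", so the annotation is revised to the river sense.
    print(refine_annotation("bank%financial", "rive"))  # bank%river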

One Sense per Translation

no code implementations 10 Jun 2021 Bradley Hauer, Grzegorz Kondrak

Translations have been used in WSD as a source of knowledge, and even as a means of delimiting word senses.

Translation, Word Sense Disambiguation

WiC = TSV = WSD: On the Equivalence of Three Semantic Tasks

no code implementations NAACL 2022 Bradley Hauer, Grzegorz Kondrak

The Word-in-Context (WiC) task has attracted considerable attention in the NLP community, as demonstrated by the popularity of the recent MCL-WiC SemEval shared task.

Word Sense Disambiguation

On Universal Colexifications

no code implementations EACL (GWC) 2021 Hongchang Bao, Bradley Hauer, Grzegorz Kondrak

Colexification occurs when two distinct concepts are lexified by the same word.

Homonymy and Polysemy Detection with Multilingual Information

no code implementations EACL (GWC) 2021 Amir Ahmad Habibi, Bradley Hauer, Grzegorz Kondrak

Deciding whether a semantically ambiguous word is homonymous or polysemous is equivalent to establishing whether it has any pair of senses that are semantically unrelated.

Translation
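The equivalence stated in the entry above can be made concrete with a small sketch; the sense relatedness scores and the threshold here are invented for illustration, not taken from the paper.

    from itertools import combinations

    def is_homonymous(senses, relatedness, threshold=0.2):
        """A word is homonymous iff some pair of its senses is unrelated."""
        return any(relatedness(a, b) < threshold
                   for a, b in combinations(senses, 2))

    # Hypothetical relatedness scores for three senses of "bat".
    SCORES = {
        frozenset({"bat_animal", "bat_club"}): 0.05,
        frozenset({"bat_animal", "bat_swing"}): 0.06,
        frozenset({"bat_club", "bat_swing"}): 0.70,
    }

    print(is_homonymous(["bat_animal", "bat_club", "bat_swing"],
                        lambda a, b: SCORES[frozenset({a, b})]))  # True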

UAlberta at SemEval 2022 Task 2: Leveraging Glosses and Translations for Multilingual Idiomaticity Detection

no code implementations SemEval (NAACL) 2022 Bradley Hauer, Seeratpal Jaura, Talgat Omarov, Grzegorz Kondrak

Further hypothesizing that literal and idiomatic expressions translate differently, our second method translates an expression in context, and uses a lexical knowledge base to determine if the translation is literal.

Task 2, Translation
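A minimal sketch of the second method described above, under the assumption of a toy English-Spanish lexicon and a translation supplied by some MT system; none of the names below come from the paper.

    # Flag an expression as idiomatic if its in-context translation cannot be
    # assembled from literal, word-level translations in a bilingual lexicon.
    LEXICON = {"big": {"gran", "grande"}, "fish": {"pez", "pescado"}}

    def looks_literal(expression, translation):
        literal_words = set().union(*(LEXICON.get(w, set())
                                      for w in expression.lower().split()))
        return all(word in literal_words for word in translation.lower().split())

    print(looks_literal("big fish", "pez grande"))          # True  (literal usage)
    print(looks_literal("big fish", "persona importante"))  # False (likely idiomatic)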

Lexical Resource Mapping via Translations

no code implementations LREC 2022 Hongchang Bao, Bradley Hauer, Grzegorz Kondrak

Aligning lexical resources that associate words with concepts in multiple languages increases the total amount of semantic information that can be leveraged for various NLP tasks.

Translation, Word Sense Disambiguation

Dorabella Cipher as Musical Inspiration

no code implementations SMP (ICON) 2021 Bradley Hauer, Colin Choi, Abram Hindle, Scott Smallwood, Grzegorz Kondrak

The Dorabella cipher is an encrypted note of English composer Edward Elgar, which has defied decipherment attempts for more than a century.

Decipherment, Position

Visually-Grounded Descriptions Improve Zero-Shot Image Classification

no code implementations 5 Jun 2023 Michael Ogezi, Bradley Hauer, Grzegorz Kondrak

Language-vision models like CLIP have made significant progress in zero-shot vision tasks, such as zero-shot image classification (ZSIC).

Classification, Image Classification, +1
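The zero-shot setup this paper builds on can be reproduced with off-the-shelf CLIP; below is a minimal sketch using Hugging Face transformers. The image path and the visually grounded prompts are illustrative assumptions, not the paper's actual descriptions or pipeline.

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Class prompts enriched with visual attributes, as opposed to bare labels.
    prompts = [
        "a photo of a cardinal, a small red bird with a pointed crest",
        "a photo of a blue jay, a blue and white bird with a black collar",
    ]
    image = Image.open("bird.jpg")  # hypothetical input image

    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(prompts, probs[0].tolist())))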
