Search Results for author: Francis M. Tyers

Found 28 papers, 3 papers with code

Yet Another Format of Universal Dependencies for Korean

1 code implementation COLING 2022 Yige Chen, Eunkyul Leah Jo, Yundong Yao, Kyungtae Lim, Miikka Silfverberg, Francis M. Tyers, Jungyeul Park

In this study, we propose a morpheme-based scheme for Korean dependency parsing and apply the proposed scheme to Universal Dependencies.

Dependency Parsing
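
To illustrate the entry above: a minimal sketch, assuming the standard ten-column CoNLL-U layout with one morpheme per row (the file name is hypothetical), of reading morpheme-based Korean dependency trees with only the Python standard library.

    def read_conllu(path):
        """Yield sentences as lists of {id, form, upos, head, deprel} dicts."""
        sent = []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line:                      # blank line ends a sentence
                    if sent:
                        yield sent
                        sent = []
                elif not line.startswith("#"):    # skip sent_id / text comments
                    cols = line.split("\t")
                    if "-" in cols[0] or "." in cols[0]:
                        continue                  # skip range and empty nodes
                    # columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
                    sent.append({"id": cols[0], "form": cols[1], "upos": cols[3],
                                 "head": cols[6], "deprel": cols[7]})
        if sent:
            yield sent

    for sentence in read_conllu("ko_morpheme-ud-sample.conllu"):  # hypothetical file
        for morpheme in sentence:
            print(morpheme["id"], morpheme["form"], morpheme["upos"],
                  morpheme["head"], morpheme["deprel"])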

Can LSTM Learn to Capture Agreement? The Case of Basque

no code implementations WS 2018 Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg

We propose the Basque agreement prediction task as a challenging benchmark for models that attempt to learn regularities in human language.

Sentence
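
As a rough illustration of the benchmark setup, not the paper's model: a minimal LSTM classifier in PyTorch that reads a sentence's token ids and predicts an agreement class for the verb. The vocabulary size, hidden size, and number of agreement classes are hypothetical.

    import torch
    import torch.nn as nn

    class AgreementLSTM(nn.Module):
        """Predict a verb-agreement class from the surrounding token sequence."""
        def __init__(self, vocab_size, n_classes, emb_dim=100, hidden_dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, n_classes)

        def forward(self, token_ids):
            embedded = self.emb(token_ids)        # (batch, seq_len, emb_dim)
            _, (h_n, _) = self.lstm(embedded)     # final hidden state
            return self.out(h_n[-1])              # (batch, n_classes) logits

    # Toy usage: a batch of 2 sentences of 7 token ids each, 12 agreement classes.
    model = AgreementLSTM(vocab_size=5000, n_classes=12)
    logits = model(torch.randint(1, 5000, (2, 7)))
    print(logits.shape)  # torch.Size([2, 12])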

A prototype dependency treebank for Breton

no code implementations JEPTALNRECITAL 2018 Francis M. Tyers, Vinit Ravishankar

This paper describes the development of the first syntactically-annotated corpus of Breton.

A bandit approach to curriculum generation for automatic speech recognition

no code implementations6 Feb 2021 Anastasia Kuznetsova, Anurag Kumar, Francis M. Tyers

The Automatic Speech Recognition (ASR) task has been a challenging domain, especially in low-data scenarios with few audio examples.

Automatic Speech Recognition (ASR) +4
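
A hedged sketch of the general idea, not the paper's exact method: a UCB1 bandit that chooses which difficulty bucket of utterances to train on next, with the reward standing in for the change in development-set error after a training step. The bucket count and reward definition are assumptions.

    import math
    import random

    class UCB1Curriculum:
        """Pick the next difficulty bucket of ASR training data to sample from."""
        def __init__(self, n_buckets):
            self.counts = [0] * n_buckets
            self.values = [0.0] * n_buckets        # running mean reward per bucket

        def select(self):
            for arm, count in enumerate(self.counts):
                if count == 0:                     # play every arm once first
                    return arm
            total = sum(self.counts)
            ucb = [value + math.sqrt(2 * math.log(total) / count)
                   for value, count in zip(self.values, self.counts)]
            return max(range(len(ucb)), key=ucb.__getitem__)

        def update(self, arm, reward):
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    # Toy loop; a real run would set reward to the measured dev-set improvement.
    bandit = UCB1Curriculum(n_buckets=4)
    for _ in range(20):
        arm = bandit.select()
        bandit.update(arm, reward=random.random())   # placeholder reward
    print(bandit.counts)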

What shall we do with an hour of data? Speech recognition for the un- and under-served languages of Common Voice

no code implementations10 May 2021 Francis M. Tyers, Josh Meyer

This technical report describes the methods and results of a three-week sprint to produce deployable speech recognition models for 31 under-served languages of the Common Voice project.

Speech Recognition
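
For readers reproducing a similar setup, a minimal sketch of pairing Common Voice clips with their transcripts. It assumes the usual release layout (a validated.tsv with "path" and "sentence" columns next to a clips/ directory); the directory path in the usage line is hypothetical.

    import csv
    import os

    def common_voice_pairs(cv_dir):
        """Yield (audio_path, transcript) pairs from one Common Voice language."""
        tsv = os.path.join(cv_dir, "validated.tsv")
        with open(tsv, encoding="utf-8") as fh:
            for row in csv.DictReader(fh, delimiter="\t"):
                yield os.path.join(cv_dir, "clips", row["path"]), row["sentence"]

    for audio_path, transcript in common_voice_pairs("cv-corpus/br"):  # hypothetical path
        print(audio_path, "->", transcript)
        break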

UniMorph 4.0: Universal Morphology

no code implementations LREC 2022 Khuyagbaatar Batsuren, Omer Goldman, Salam Khalifa, Nizar Habash, Witold Kieraś, Gábor Bella, Brian Leonard, Garrett Nicolai, Kyle Gorman, Yustinus Ghanggo Ate, Maria Ryskina, Sabrina J. Mielke, Elena Budianskaya, Charbel El-Khaissi, Tiago Pimentel, Michael Gasser, William Lane, Mohit Raj, Matt Coler, Jaime Rafael Montoya Samame, Delio Siticonatzi Camaiteri, Benoît Sagot, Esaú Zumaeta Rojas, Didier López Francis, Arturo Oncevay, Juan López Bautista, Gema Celeste Silva Villegas, Lucas Torroba Hennigen, Adam Ek, David Guriel, Peter Dirix, Jean-Philippe Bernardy, Andrey Scherbakov, Aziyana Bayyr-ool, Antonios Anastasopoulos, Roberto Zariquiey, Karina Sheifer, Sofya Ganieva, Hilaria Cruz, Ritván Karahóǧa, Stella Markantonatou, George Pavlidis, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Candy Angulo, Jatayu Baxi, Andrew Krizhanovsky, Natalia Krizhanovskaya, Elizabeth Salesky, Clara Vania, Sardana Ivanova, Jennifer White, Rowan Hall Maudslay, Josef Valvoda, Ran Zmigrod, Paula Czarnowska, Irene Nikkarinen, Aelita Salchak, Brijesh Bhatt, Christopher Straughn, Zoey Liu, Jonathan North Washington, Yuval Pinter, Duygu Ataman, Marcin Wolinski, Totok Suhardijanto, Anna Yablonskaya, Niklas Stoehr, Hossep Dolatian, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Aryaman Arora, Richard J. Hatcher, Ritesh Kumar, Jeremiah Young, Daria Rodionova, Anastasia Yemelina, Taras Andrushko, Igor Marchenko, Polina Mashkovtseva, Alexandra Serova, Emily Prud'hommeaux, Maria Nepomniashchaya, Fausto Giunchiglia, Eleanor Chodroff, Mans Hulden, Miikka Silfverberg, Arya D. McCarthy, David Yarowsky, Ryan Cotterell, Reut Tsarfaty, Ekaterina Vylomova

The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation and a type-level resource of annotated data in diverse languages realizing that schema.

Morphological Inflection
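
To make the type-level resource concrete, a minimal sketch of loading a UniMorph table, which is distributed as tab-separated triples of lemma, inflected form, and a semicolon-separated feature bundle (e.g. N;NOM;PL). The file name and lemma in the usage lines are hypothetical.

    from collections import defaultdict

    def load_unimorph(path):
        """Group (form, feature-list) pairs by lemma from a UniMorph file."""
        paradigms = defaultdict(list)
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                cols = line.split("\t")
                lemma, form, feats = cols[0], cols[1], cols[2]
                paradigms[lemma].append((form, feats.split(";")))
        return paradigms

    paradigms = load_unimorph("kor")                 # hypothetical language file
    for form, feats in paradigms.get("가다", []):    # hypothetical lemma
        print(form, feats)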

OmniLingo: Listening- and speaking-based language learning

no code implementations10 Oct 2023 Francis M. Tyers, Nicholas Howell

In this demo paper we present OmniLingo, an architecture for distributing data for listening- and speaking-based language learning applications, and a demonstration client built using the architecture.
