Search Results for author: Alexis Michaud

Found 10 papers, 2 papers with code

Contribuer au progrès solidaire des recherches et de la documentation : la Collection Pangloss et la Collection AuCo (Contributing to joint progress in documentation and research: some achievements and future perspectives of the Pangloss Collection and the AuCo Collection)

no code implementations JEP-TALN-RECITAL 2016 Alexis Michaud, Séverine Guillaume, Guillaume Jacques, Đăng-Khoa Mạc, Michel Jacobson, Thu-Hà Phạm, Matthew Deo

This paper presents the scientific projects and achievements of two collections hosted on the Cocoon oral-resources platform: the Pangloss Collection, which mainly concerns languages of oral tradition (without a writing system) from around the world, and the AuCo Collection, dedicated to languages of Vietnam and neighbouring countries.

AlloVera: A Multilingual Allophone Database

no code implementations LREC 2020 David R. Mortensen, Xinjian Li, Patrick Littell, Alexis Michaud, Shruti Rijhwani, Antonios Anastasopoulos, Alan W. Black, Florian Metze, Graham Neubig

While phonemic representations are language-specific, phonetic representations (stated in terms of (allo)phones) are much closer to a universal (language-independent) transcription.

Speech Recognition
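The contrast drawn in the abstract between language-specific phonemes and near-universal (allo)phones can be made concrete with a small mapping table. The sketch below is a toy illustration in Python, not the actual AlloVera schema: the language codes, the phone inventories, and the phones_to_phonemes helper are all assumptions made for the example.

# Toy illustration (not the actual AlloVera schema): language-specific
# allophone -> phoneme tables let a universal, phone-level transcription
# be projected onto the phonemes of a given language.

# Hypothetical tables for two languages (illustrative only).
ALLOPHONE_TO_PHONEME = {
    "eng": {"pʰ": "p", "p": "p", "ɾ": "t", "t": "t"},
    "spa": {"β": "b", "b": "b", "ð": "d", "d": "d"},
}

def phones_to_phonemes(phones, lang):
    """Collapse a phone (allophone) sequence into phonemes for `lang`.

    Unknown phones are kept as-is rather than dropped.
    """
    table = ALLOPHONE_TO_PHONEME[lang]
    return [table.get(p, p) for p in phones]

if __name__ == "__main__":
    # In English, the phones [pʰ ɾ] realise the phonemes /p t/.
    print(phones_to_phonemes(["pʰ", "ɾ"], "eng"))  # -> ['p', 't']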

From 'Snippet-lects' to Doculects and Dialects: Leveraging Neural Representations of Speech for Placing Audio Signals in a Language Landscape

no code implementations 29 May 2023 Séverine Guillaume, Guillaume Wisniewski, Alexis Michaud

We use max-pooling to aggregate the neural representations from a "snippet-lect" (the speech in a 5-second audio snippet) to a "doculect" (the speech in a given resource), then to dialects and languages.
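A minimal sketch of the pooling hierarchy described in the abstract: frame-level neural representations are max-pooled into a snippet-level ("snippet-lect") vector, and snippet-level vectors are max-pooled into a doculect-level vector for a whole resource. The array shapes, the pool_snippet and pool_doculect names, and the random toy features are assumptions for illustration; no specific speech model is implied.

# Max-pooling aggregation: frames -> snippet-lect -> doculect (toy shapes).
import numpy as np

def pool_snippet(frame_reprs):
    """frame_reprs: (n_frames, dim) features for one 5-second snippet."""
    return frame_reprs.max(axis=0)               # -> (dim,) snippet-lect vector

def pool_doculect(snippet_reprs):
    """Aggregate all snippet-lect vectors of one resource."""
    return np.stack(snippet_reprs).max(axis=0)   # -> (dim,) doculect vector

rng = np.random.default_rng(0)
snippets = [rng.normal(size=(250, 768)) for _ in range(10)]  # 10 toy snippets
doculect_vector = pool_doculect([pool_snippet(s) for s in snippets])
print(doculect_vector.shape)  # (768,)

Doculect vectors built this way can then be compared across resources to place recordings relative to dialects and languages.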

Establishing degrees of closeness between audio recordings along different dimensions using large-scale cross-lingual models

no code implementations 8 Feb 2024 Maxime Fily, Guillaume Wisniewski, Séverine Guillaume, Gilles Adda, Alexis Michaud

We propose a new unsupervised method using ABX tests on audio recordings with carefully curated metadata to shed light on the type of information present in the representations.
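For readers unfamiliar with ABX testing, the sketch below shows the core idea on recording-level embeddings: for triples (A, B, X) where X shares a category (for instance, the same language or speaker, as given by the metadata) with A but not with B, count how often X lies closer to A than to B. The cosine distance, the abx_score helper, and the toy Gaussian embeddings are assumptions for illustration, not the paper's actual setup.

# Toy ABX discrimination test over recording-level embeddings.
import numpy as np
from itertools import product

def cosine_distance(u, v):
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def abx_score(cat_a, cat_b):
    """Fraction of (A, X, B) triples with d(X, A) < d(X, B)."""
    wins, total = 0, 0
    for a, x in product(cat_a, repeat=2):
        if a is x:
            continue  # A and X must be distinct recordings
        for b in cat_b:
            wins += cosine_distance(x, a) < cosine_distance(x, b)
            total += 1
    return wins / total

rng = np.random.default_rng(1)
lang1 = [rng.normal(loc=0.0, size=64) for _ in range(5)]
lang2 = [rng.normal(loc=1.0, size=64) for _ in range(5)]
print(f"ABX score: {abx_score(lang1, lang2):.2f}")  # near 1.0 if well separated

A score near 0.5 means the embeddings carry no usable information about the tested dimension; a score near 1.0 means recordings from the same category are reliably closer to each other than to recordings from the other category.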
