Search Results for author: David Mortensen

Found 11 papers, 6 papers with code

Transformed Protoform Reconstruction

1 code implementation • 4 Jul 2023 • Young Min Kim, Kalvin Chang, Chenxuan Cui, David Mortensen

We update their model with the state-of-the-art seq2seq model: the Transformer.

PWESuite: Phonetic Word Embeddings and Tasks They Facilitate

1 code implementation • 5 Apr 2023 • Vilém Zouhar, Kalvin Chang, Chenxuan Cui, Nathaniel Carlson, Nathaniel Robinson, Mrinmaya Sachan, David Mortensen

In this work, we develop several novel methods which leverage articulatory features to build phonetically informed word embeddings, and present a set of phonetic word embeddings to encourage their community development, evaluation and use.

Retrieval • Word Embeddings
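
The abstract above describes mean-pooling articulatory features into word-level vectors; the listing itself contains no code, so here is a minimal toy sketch of that idea. The feature table is a small hypothetical example, not the inventory or method used in PWESuite.

```python
# Toy sketch: a phonetically informed word embedding built by averaging
# articulatory feature vectors over a word's phones. The feature table is
# a hypothetical four-feature example, not the PWESuite inventory.
ARTICULATORY = {
    # phone: (voiced, nasal, labial, high)
    "p": (0, 0, 1, 0),
    "b": (1, 0, 1, 0),
    "m": (1, 1, 1, 0),
    "i": (1, 0, 0, 1),
    "a": (1, 0, 0, 0),
}

def phonetic_embedding(phones):
    """Mean-pool articulatory feature vectors over a phone sequence."""
    vecs = [ARTICULATORY[p] for p in phones]
    dim = len(vecs[0])
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(dim))

print(phonetic_embedding(["m", "a"]))  # (1.0, 0.5, 0.5, 0.0)
```

Because the vectors are built from articulatory features rather than co-occurrence statistics, phonetically similar words end up close in the embedding space even if they never appear in similar contexts.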

Mathematically Modeling the Lexicon Entropy of Emergent Language

1 code implementation • 28 Nov 2022 • Brendon Boldt, David Mortensen

We formulate a stochastic process, FiLex, as a mathematical model of lexicon entropy in deep learning-based emergent language systems.
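
The quantity being modeled here, lexicon entropy, is the standard Shannon entropy of the word-type distribution in an emergent language's utterances. A minimal computation of that quantity (this is the textbook definition, not the FiLex model itself):

```python
import math
from collections import Counter

def lexicon_entropy(tokens):
    """Shannon entropy (bits) of the empirical word-type distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A lexicon used uniformly over two words has entropy 1 bit.
print(lexicon_entropy(["red", "blue", "red", "blue"]))  # 1.0
```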

Modeling Emergent Lexicon Formation with a Self-Reinforcing Stochastic Process

1 code implementation • 22 Jun 2022 • Brendon Boldt, David Mortensen

We introduce FiLex, a self-reinforcing stochastic process which models finite lexicons in emergent language experiments.
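
The abstract does not spell out FiLex's definition, but a self-reinforcing stochastic process over a lexicon can be sketched in the style of a Pólya urn: each draw reuses an existing word with probability proportional to its past use, or coins a new word with weight `alpha`. The parameterization below is a generic illustration, not the actual FiLex process.

```python
import random

def self_reinforcing_lexicon(steps, alpha=1.0, seed=0):
    """Pólya-urn-style sketch of a self-reinforcing lexicon process.

    Each step reuses word i with probability counts[i] / (sum(counts) + alpha)
    and coins a new word otherwise. Hypothetical parameterization, shown only
    to illustrate the "rich get richer" dynamic.
    """
    rng = random.Random(seed)
    counts = []  # counts[i] = number of uses of word i so far
    for _ in range(steps):
        total = sum(counts) + alpha
        r = rng.uniform(0, total)
        acc = 0.0
        for i, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[i] += 1  # reuse an existing word
                break
        else:
            counts.append(1)  # coin a new word
    return counts

print(self_reinforcing_lexicon(100))
```

The self-reinforcement makes early random choices snowball, so the process yields a small number of frequent words and a long tail of rare ones, the skewed shape typical of both natural and emergent lexicons.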

Recommendations for Systematic Research on Emergent Language

no code implementations • 22 Jun 2022 • Brendon Boldt, David Mortensen

Emergent language is unique among fields within the discipline of machine learning for its open-endedness, not obviously presenting well-defined problems to be solved.

Quantifying Cognitive Factors in Lexical Decline

1 code implementation • 12 Oct 2021 • David Francis, Ella Rabinovich, Farhan Samir, David Mortensen, Suzanne Stevenson

Specifically, we propose a variety of psycholinguistic factors -- semantic, distributional, and phonological -- that we hypothesize are predictive of lexical decline, in which words greatly decrease in frequency over time.
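
The target phenomenon, words greatly decreasing in frequency over time, can be operationalized in many ways; one simple sketch (not necessarily the paper's criterion) flags words whose relative frequency in a later period falls below some fraction of their earlier value. The counts below are invented for illustration.

```python
def relative_freq(counts):
    """Normalize raw counts to relative frequencies."""
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def declining_words(early, late, threshold=0.5):
    """Words whose relative frequency dropped below `threshold` times the
    earlier value. A simple operationalization of lexical decline; the
    paper's own criterion may differ."""
    fe, fl = relative_freq(early), relative_freq(late)
    return sorted(w for w in fe if fl.get(w, 0.0) < threshold * fe[w])

early = {"roam": 30, "walk": 70}  # invented corpus counts, period 1
late = {"roam": 5, "walk": 95}    # invented corpus counts, period 2
print(declining_words(early, late))  # ['roam']
```

Words flagged this way would then be the positive class that the proposed semantic, distributional, and phonological factors are tested against.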

Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic Representation Learning

no code implementations • NAACL 2016 • Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W. Black, Lori Levin, Chris Dyer

We introduce polyglot language models, recurrent neural network models trained to predict symbol sequences in many different languages using shared representations of symbols and conditioning on typological information about the language to be predicted.

Representation Learning
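
The core polyglot idea in the abstract, one shared symbol vocabulary with predictions conditioned on the language, can be illustrated with a deliberately simplified count-based model. The actual system is a recurrent neural network conditioned on typological features; the class below is only a toy stand-in for the conditioning structure.

```python
from collections import defaultdict

class PolyglotBigram:
    """Toy sketch of language-conditioned sequence prediction: one shared
    symbol vocabulary, with bigram counts keyed by a language identifier.
    (The actual polyglot model is an RNN, not a count-based model.)"""

    def __init__(self):
        self.counts = defaultdict(int)   # (lang, prev, cur) -> count
        self.context = defaultdict(int)  # (lang, prev) -> count

    def train(self, lang, symbols):
        for prev, cur in zip(symbols, symbols[1:]):
            self.counts[(lang, prev, cur)] += 1
            self.context[(lang, prev)] += 1

    def prob(self, lang, prev, cur):
        ctx = self.context[(lang, prev)]
        return self.counts[(lang, prev, cur)] / ctx if ctx else 0.0

lm = PolyglotBigram()
lm.train("en", ["t", "i", "n"])  # invented example sequences
lm.train("de", ["t", "i", "r"])
print(lm.prob("en", "i", "n"))  # 1.0
```

Even in this toy form, the same symbols "t" and "i" are shared across languages while the predictions diverge by language, which is the shared-representation-plus-conditioning structure the paper exploits.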
