no code implementations • NAACL (ACL) 2022 • Greta Tuckute, Aalok Sathe, Mingye Wang, Harley Yoder, Cory Shain, Evelina Fedorenko
The modular design of SentSpace allows researchers to easily integrate their own feature computation into the pipeline while benefiting from a common framework for evaluation and visualization.
2 code implementations • 25 Sep 2022 • Cory Shain, William Schuler
Scientists often use observational time series data to study complex natural processes, yet regression analyses typically assume simplistic dynamics.
1 code implementation • ACL 2021 • Cory Shain
The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes.
no code implementations • COLING 2020 • Evan Jaffe, Cory Shain, William Schuler
Models of human sentence processing effort tend to focus on costs associated with retrieving structures and discourse referents from memory (memory-based) and/or on costs associated with anticipating upcoming words and structures based on contextual cues (expectation-based) (Levy, 2008).
1 code implementation • CoNLL 2020 • Cory Shain, Micha Elsner
Classical accounts of child language learning invoke memory limits as a pressure to discover sparse, language-like representations of speech, while more recent proposals stress the importance of prediction for language learning.
no code implementations • NAACL 2019 • Cory Shain, Micha Elsner
In this paper, we deploy binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English).
1 code implementation • NAACL 2019 • Cory Shain
A number of psycholinguistic studies have factorially manipulated words' contextual predictabilities and corpus frequencies and shown separable effects of each on measures of human sentence processing, a pattern which has been used to support distinct mechanisms underlying prediction on the one hand and lexical retrieval on the other.
1 code implementation • EMNLP 2018 • Cory Shain, William Schuler
Researchers in computational psycholinguistics frequently use linear models to study time series data generated by human subjects.
no code implementations • WS 2017 • Taylor Mahler, Willy Cheung, Micha Elsner, David King, Marie-Catherine de Marneffe, Cory Shain, Symon Stevens-Guille, Michael White
This paper describes our "breaker" submission to the 2017 EMNLP "Build It Break It" shared task on sentiment analysis.
no code implementations • EMNLP 2017 • Micha Elsner, Cory Shain
We present the first unsupervised LSTM speech segmenter as a cognitive model of the acquisition of words from unsegmented input.
no code implementations • COLING 2016 • Cory Shain, William Bryce, Lifeng Jin, Victoria Krakovna, Finale Doshi-Velez, Timothy Miller, William Schuler, Lane Schwartz
This paper presents a new memory-bounded left-corner parsing model for unsupervised raw-text syntax induction, using unsupervised hierarchical hidden Markov models (UHHMM).
no code implementations • WS 2016 • Cory Shain, Marten van Schijndel, Richard Futrell, Edward Gibson, William Schuler
Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli.