Search Results for author: Takateru Yamakoshi

Found 4 papers, 3 papers with code

Probing BERT’s priors with serial reproduction chains

no code implementations • Findings (ACL) 2022 • Takateru Yamakoshi, Thomas Griffiths, Robert Hawkins

Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT.
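
The title points at the method: a serial reproduction chain amounts to Gibbs-style sampling from the MLM, repeatedly masking one position and redrawing it from BERT's conditional distribution. A minimal sketch of that idea, assuming the HuggingFace transformers library and the bert-base-uncased checkpoint (neither is named in this listing, and this is not the authors' released code):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def gibbs_step(ids: torch.Tensor, pos: int) -> torch.Tensor:
    """Mask position `pos`, then resample it from BERT's conditional."""
    masked = ids.clone()
    masked[0, pos] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=masked).logits[0, pos]
    probs = torch.softmax(logits, dim=-1)
    ids[0, pos] = torch.multinomial(probs, num_samples=1).item()
    return ids

# Run a chain from an arbitrary seed sentence.
ids = tokenizer("the cat sat on the mat", return_tensors="pt").input_ids
for _ in range(50):                          # sweeps; real chains run longer
    for pos in range(1, ids.shape[1] - 1):   # skip [CLS] and [SEP]
        ids = gibbs_step(ids, pos)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

In practice one would discard a burn-in period and thin the chain before treating the draws as representative samples of what the model has learned.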

Causal interventions expose implicit situation models for commonsense language understanding

1 code implementation • 6 Jun 2023 • Takateru Yamakoshi, James L. McClelland, Adele E. Goldberg, Robert D. Hawkins

Accounts of human language processing have long appealed to implicit "situation models" that enrich comprehension with relevant but unstated world knowledge.

World Knowledge
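
The paper's causal interventions operate on model internals; purely as a simpler behavioral stand-in (hypothetical stimuli, not the authors' materials), one can intervene on a single context word and watch BERT's probability for an unstated, situation-relevant filler shift. This sketch reuses the tokenizer and model loaded above:

```python
def mask_fill_prob(sentence: str, target: str) -> float:
    """Probability BERT assigns to `target` at the [MASK] position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = torch.softmax(logits, dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return probs[0, target_id].item()

# Intervening on the verb should shift the implied instrument.
base = mask_fill_prob("she cut the bread with a [MASK] .", "knife")
alt = mask_fill_prob("she ate the soup with a [MASK] .", "knife")
print(f"p(knife | cut) = {base:.3f}  vs  p(knife | ate) = {alt:.3f}")
```

A contrast like this only gauges input-output behavior; locating where the situation information lives inside the network would require intervening on internal activations, as the paper does.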

Probing BERT's priors with serial reproduction chains

1 code implementation • 24 Feb 2022 • Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins

Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT.

Language Modelling • Masked Language Modeling
